NumPy 1.15.0 is a release with an unusual number of cleanups, many deprecations of old functions, and improvements to many existing functions. Please read the detailed descriptions below to see if you are affected.
For testing, we have switched to pytest as a replacement for the no longer maintained nose framework. The old nose-based interface remains for downstream projects who may still be using it.
The Python versions supported by this release are 2.7, 3.4-3.7. The wheels are linked with OpenBLAS v0.3.0, which should fix some of the linalg problems reported for NumPy 1.14.
Highlights include the new numpy.printoptions context manager and fixes and improvements to numpy.einsum.

New functions:

- numpy.gcd and numpy.lcm, to compute the greatest common divisor and least common multiple.
- numpy.ma.stack, the numpy.stack array-joining function generalized to masked arrays.
- numpy.quantile function, an interface to percentile without factors of 100.
- numpy.nanquantile function, an interface to nanpercentile without factors of 100.
- numpy.printoptions, a context manager that sets print options temporarily for the scope of the with block:

  >>> with np.printoptions(precision=2):
  ...     print(np.array([2.0]) / 3)
  [0.67]
- numpy.histogram_bin_edges, a function to get the edges of the bins used by a histogram without needing to calculate the histogram.
- C functions npy_get_floatstatus_barrier and npy_clear_floatstatus_barrier have been added to deal with compiler optimization changing the order of operations. See below for details.
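As a brief sketch (not part of the original notes), the new numpy.histogram_bin_edges listed above lets you compute edges once and reuse them for several datasets:

```python
import numpy as np

data = np.array([0.1, 0.4, 0.5, 0.9, 1.3, 2.0])

# Compute the bin edges np.histogram would use, without computing
# the histogram itself.
edges = np.histogram_bin_edges(data, bins=4)

# The edges can then be reused, e.g. to bin another dataset the same way.
counts, _ = np.histogram(data, bins=edges)
```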
Aliases of builtin pickle functions are deprecated, in favor of their unaliased pickle.<func> names:

- numpy.loads
- numpy.core.numeric.load
- numpy.core.numeric.loads
- numpy.ma.loads, numpy.ma.dumps
- numpy.ma.load, numpy.ma.dump - these functions already failed on python 3 when called with a string.

Multidimensional indexing with anything but a tuple is deprecated. This means that code such as ind = [slice(None), 0]; arr[ind] should be changed to a tuple, e.g., ind = [slice(None), 0]; arr[tuple(ind)] or arr[(slice(None), 0)]. That change is necessary to avoid ambiguity in expressions such as arr[[[0, 1], [0, 1]]], currently interpreted as arr[array([0, 1]), array([0, 1])], that will be interpreted as arr[array([[0, 1], [0, 1]])] in the future.

Imports from the following sub-modules are deprecated; they will be removed at some future date:

- numpy.testing.utils
- numpy.testing.decorators
- numpy.testing.nosetester
- numpy.testing.noseclasses
- numpy.core.umath_tests

Giving a generator to numpy.sum is now deprecated. This was undocumented behavior, but worked. Previously, it would calculate the sum of the generator expression. In the future, it might return a different result. Use np.sum(np.fromiter(generator)) or the built-in Python sum instead.

Users of the C-API should call PyArray_ResolveWritebackIfCopy or PyArray_DiscardWritebackIfCopy on any array with the WRITEBACKIFCOPY flag set, before deallocating the array. A deprecation warning will be emitted if those calls are not used when needed.

Users of nditer should use the nditer object as a context manager anytime one of the iterator operands is writeable, so that numpy can manage writeback semantics, or should call it.close(). A RuntimeWarning may be emitted otherwise in these cases.
The normed argument of np.histogram, deprecated long ago in 1.6.0, now emits a DeprecationWarning.

The following compiled modules have been renamed and made private:
- umath_tests -> _umath_tests
- test_rational -> _rational_tests
- multiarray_tests -> _multiarray_tests
- struct_ufunc_test -> _struct_ufunc_tests
- operand_flag_tests -> _operand_flag_tests

The umath_tests module is still available for backwards compatibility, but will be removed in the future.
NpzFile returned by np.savez is now a collections.abc.Mapping

This means it behaves like a readonly dictionary, and has a new .values() method and len() implementation.
For python 3, this means that .iteritems(), .iterkeys() have been deprecated, and .keys() and .items() now return views and not lists. This is consistent with how the builtin dict type changed between python 2 and python 3.
nditer must be used in a context manager

When using a numpy.nditer with the "writeonly" or "readwrite" flags, there are some circumstances where nditer doesn't actually give you a view of the writable array. Instead, it gives you a copy, and if you make changes to the copy, nditer later writes those changes back into your actual array. Currently, this writeback occurs when the array objects are garbage collected, which makes this API error-prone on CPython and entirely broken on PyPy. Therefore, nditer should now be used as a context manager whenever it is used with writeable arrays, e.g., with np.nditer(...) as it: .... You may also explicitly call it.close() for cases where a context manager is unusable, for instance in generator expressions.
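A minimal sketch of the recommended pattern, with the iterator used as a context manager so that any writeback buffers are flushed into the original array when the block exits:

```python
import numpy as np

a = np.arange(6.0).reshape(2, 3)

# The context manager guarantees writeback happens deterministically,
# rather than whenever the iterator is garbage collected.
with np.nditer(a, op_flags=[['readwrite']]) as it:
    for x in it:
        x[...] = 2 * x

# `a` now holds the doubled values.
```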
The last nose release was 1.3.7 in June, 2015, and development of that tool has ended, consequently NumPy has now switched to using pytest. The old decorators and nose tools that were previously used by some downstream projects remain available, but will not be maintained. The standard testing utilities, assert_almost_equal and such, are not affected by this change except for the nose-specific functions import_nose and raises. Those functions are not used in numpy, but are kept for downstream compatibility.
Numpy no longer monkey-patches ctypes with __array_interface__

Previously numpy added __array_interface__ attributes to all the integer types from ctypes.
np.ma.notmasked_contiguous and np.ma.flatnotmasked_contiguous always return lists

This is the documented behavior, but previously the result could be any of slice, None, or list.
All downstream users seem to check for the None result from flatnotmasked_contiguous and replace it with []. Those callers will continue to work as before.
np.squeeze restores old behavior of objects that cannot handle an axis argument

Prior to version 1.7.0, numpy.squeeze did not have an axis argument and all empty axes were removed by default. The incorporation of an axis argument made it possible to selectively squeeze single or multiple empty axes, but the old API expectation was not respected because axes could still be selectively removed (silent success) from an object expecting all empty axes to be removed. That silent, selective removal of empty axes for objects expecting the old behavior has been fixed and the old behavior restored.
unstructured void array's .item method now returns a bytes object

.item now returns a bytes object instead of a buffer or byte array. This may affect code which assumed the return value was mutable, which is no longer the case.
copy.copy and copy.deepcopy no longer turn masked into an array

Since np.ma.masked is a readonly scalar, copying should be a no-op. These functions now behave consistently with np.copy().
The change that multi-field indexing of structured arrays returns a view instead of a copy is pushed back to 1.16. A new method numpy.lib.recfunctions.repack_fields has been introduced to help mitigate the effects of this change, which can be used to write code compatible with both numpy 1.15 and 1.16. For more information on how to update code to account for this future change see the "accessing multiple fields" section of the user guide.
npy_get_floatstatus_barrier and npy_clear_floatstatus_barrier

Functions npy_get_floatstatus_barrier and npy_clear_floatstatus_barrier have been added and should be used in place of the npy_get_floatstatus and npy_clear_status functions. Optimizing compilers like GCC 8.1 and Clang were rearranging the order of operations when the previous functions were used in the ufunc SIMD functions, resulting in the floatstatus flags being checked before the operation whose status we wanted to check was run. See #10339.
PyArray_GetDTypeTransferFunction

PyArray_GetDTypeTransferFunction now defaults to using user-defined copyswapn / copyswap for user-defined dtypes. If this causes a significant performance hit, consider implementing copyswapn to reflect the implementation of PyArray_GetStridedCopyFn. See #10898.
np.gcd and np.lcm ufuncs added for integer and objects types

These compute the greatest common divisor, and lowest common multiple, respectively. These work on all the numpy integer types, as well as the builtin arbitrary-precision Decimal and long types.
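An illustrative sketch (not from the original notes): both are ordinary ufuncs, so they work elementwise and broadcast:

```python
import numpy as np

# Elementwise gcd of two integer arrays.
g = np.gcd(np.array([12, 20, 7]), np.array([18, 8, 14]))

# lcm broadcasts a scalar against an array, like any ufunc.
l = np.lcm(4, np.array([6, 10]))
```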
The build system has been modified to add support for the _PYTHON_HOST_PLATFORM environment variable, used by distutils when compiling on one platform for another platform. This makes it possible to compile NumPy for iOS targets.
This only enables you to compile NumPy for one specific platform at a time. Creating a full iOS-compatible NumPy package requires building for the 5 architectures supported by iOS (i386, x86_64, armv7, armv7s and arm64), and combining these 5 compiled build products into a single "fat" binary.
return_indices keyword added for np.intersect1d

New keyword return_indices returns the indices of the two input arrays that correspond to the common elements.
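A short sketch (not part of the original notes) of the new keyword, recovering the common values from either input via the returned indices:

```python
import numpy as np

x = np.array([1, 3, 4, 3])
y = np.array([3, 1, 2, 1])

# ix and iy index the first occurrence of each common value in x and y.
common, ix, iy = np.intersect1d(x, y, return_indices=True)
```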
np.quantile and np.nanquantile

Like np.percentile and np.nanpercentile, but takes quantiles in [0, 1] rather than percentiles in [0, 100]. np.percentile is now a thin wrapper around np.quantile with the extra step of dividing by 100.
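A minimal sketch of the relationship between the two interfaces:

```python
import numpy as np

a = np.array([1, 2, 3, 4])

# np.quantile takes q in [0, 1]; np.percentile takes q in [0, 100].
m = np.quantile(a, 0.5)
p = np.percentile(a, 50)
# The two calls compute the same value.
```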
Added experimental support for the 64-bit RISC-V architecture.
np.einsum updates

Syncs einsum path optimization tech between numpy and opt_einsum. In particular, the greedy path has received many enhancements by @jcmgray.
np.ufunc.reduce and related functions now accept an initial value

np.ufunc.reduce, np.sum, np.prod, np.min and np.max all now accept an initial keyword argument that specifies the value to start the reduction with.
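As an illustrative sketch (not from the original notes), `initial` both seeds the reduction and makes reductions over empty arrays well-defined:

```python
import numpy as np

# Start the sum at 10 instead of the default identity 0.
s = np.sum([1, 2, 3], initial=10)

# Without `initial`, np.min of an empty array raises; with it, the
# initial value is returned.
m = np.min([], initial=0)
```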
np.flip can operate over multiple axes

np.flip now accepts None, or tuples of int, in its axis argument. If axis is None, it will flip over all the axes.
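A brief sketch of the new axis handling:

```python
import numpy as np

a = np.arange(4).reshape(2, 2)

b = np.flip(a)               # axis=None flips over all axes
c = np.flip(a, axis=(0, 1))  # equivalent, with an explicit axis tuple
```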
histogram and histogramdd functions have moved to np.lib.histograms

These were originally found in np.lib.function_base. They are still available under their un-scoped np.histogram(dd) names, and to maintain compatibility, aliased at np.lib.function_base.histogram(dd).
Code that does from np.lib.function_base import * will need to be updated with the new location, and should consider not using import * in future.
histogram will accept NaN values when explicit bins are given

Previously it would fail when trying to compute a finite range for the data. Since the range is ignored anyway when the bins are given explicitly, this error was needless.
Note that calling histogram on NaN values continues to raise the RuntimeWarnings typical of working with nan values, which can be silenced as usual with errstate.
histogram works on datetime types, when explicit bin edges are given

Dates, times, and timedeltas can now be histogrammed. The bin edges must be passed explicitly, and are not yet computed automatically.
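A short sketch (not part of the original notes), binning dates against explicitly supplied datetime64 edges:

```python
import numpy as np

dates = np.array(['2018-01-01', '2018-01-05', '2018-01-20'],
                 dtype='datetime64[D]')
edges = np.array(['2018-01-01', '2018-01-10', '2018-02-01'],
                 dtype='datetime64[D]')

# Explicit bin edges are required; automatic edges are not yet
# computed for datetime data.
counts, _ = np.histogram(dates, bins=edges)
```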
histogram "auto" estimator handles limited variance better

No longer does an IQR of 0 result in n_bins=1, rather the number of bins chosen is related to the data size in this situation.
The edges returned by histogram and histogramdd now match the data float type

When passed np.float16, np.float32, or np.longdouble data, the returned edges are now of the same dtype. Previously, histogram would only return the same type if explicit bins were given, and histogramdd would produce float64 bins no matter what the inputs.
histogramdd allows explicit ranges to be given in a subset of axes

The range argument of numpy.histogramdd can now contain None values to indicate that the range for the corresponding axis should be computed from the data. Previously, this could not be specified on a per-axis basis.
The normed arguments of histogramdd and histogram2d have been renamed

These arguments are now called density, which is consistent with histogram. The old argument continues to work, but the new name should be preferred.
np.r_ works with 0d arrays, and np.ma.mr_ works with np.ma.masked

0d arrays passed to the r_ and mr_ concatenation helpers are now treated as though they are arrays of length 1. Previously, passing these was an error. As a result, numpy.ma.mr_ now works correctly on the masked constant.
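A minimal sketch of the new behavior:

```python
import numpy as np

# 0d arrays are now treated as length-1 arrays by the concatenation
# helpers; previously this raised an error.
r = np.r_[np.array(1), np.array([2, 3]), np.array(4)]
```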
np.ptp accepts a keepdims argument, and extended axis tuples

np.ptp (peak-to-peak) can now work over multiple axes, just like np.max and np.min.
MaskedArray.astype is now identical to ndarray.astype

This means it takes all the same arguments, making more code written for ndarray work for masked arrays too.
Change to simd.inc.src to allow use of AVX2 or AVX512 at compile time. Previously compilation for avx2 (or 512) with -march=native would still use the SSE code for the simd functions even when the rest of the code got AVX2.
nan_to_num always returns scalars when receiving scalar or 0d inputs

Previously an array was returned for integer scalar inputs, which is inconsistent with the behavior for float inputs, and that of ufuncs in general. For all types of scalar or 0d input, the result is now a scalar.
np.flatnonzero works on numpy-convertible types

np.flatnonzero now uses np.ravel(a) instead of a.ravel(), so it works for lists, tuples, etc.
np.interp returns numpy scalars rather than builtin scalars

Previously np.interp(0.5, [0, 1], [10, 20]) would return a float, but now it returns a np.float64 object, which more closely matches the behavior of other functions.
Additionally, the special case of np.interp(object_array_0d, ...) is no longer supported, as np.interp(object_array_nd) was never supported anyway.
As a result of this change, the period argument can now be used on 0d arrays.
Previously np.dtype([(u'name', float)]) would raise a TypeError in Python 2, as only bytestrings were allowed in field names. Now any unicode string field names will be encoded with the ascii codec, raising a UnicodeEncodeError upon failure.
This change makes it easier to write Python 2/3 compatible code using from __future__ import unicode_literals, which previously would cause string literal field names to raise a TypeError in Python 2.
Comparison ufuncs accept dtype=object, overriding the default bool

This allows object arrays of symbolic types, which override == and other operators to return expressions, to be compared elementwise with np.equal(a, b, dtype=object).
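An illustrative sketch (not from the original notes), using Fraction elements in place of a symbolic type; dtype=object keeps whatever the elements' __eq__ returns instead of coercing to bool:

```python
import numpy as np
from fractions import Fraction

a = np.array([Fraction(1, 2), Fraction(1, 3)], dtype=object)
b = np.array([Fraction(1, 2), Fraction(2, 3)], dtype=object)

# The result is an object array holding the raw __eq__ results.
eq = np.equal(a, b, dtype=object)
```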
sort functions accept kind='stable'

Up until now, to perform a stable sort on the data, the user must do:

>>> np.sort([5, 2, 6, 2, 1], kind='mergesort')
array([1, 2, 2, 5, 6])

because merge sort is the only stable sorting algorithm available in NumPy. However, having kind='mergesort' does not make it explicit that the user wants to perform a stable sort, thus harming readability.

This change allows the user to specify kind='stable', thus clarifying the intent.
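The new spelling looks like this (a minimal sketch):

```python
import numpy as np

a = np.array([5, 2, 6, 2, 1])

# 'stable' states the intent directly, rather than naming the
# underlying algorithm.
s = np.sort(a, kind='stable')
```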
When ufuncs perform accumulation they no longer make temporary copies because of the overlap between input and output; that is, the next element accumulated is added before the accumulated result is stored in its place, hence the overlap is safe. Avoiding the copy results in faster execution.
linalg.matrix_power can now handle stacks of matrices

Like other functions in linalg, matrix_power can now deal with arrays of dimension larger than 2, which are treated as stacks of matrices. As part of the change, to further improve consistency, the name of the first argument has been changed to a (from M), and the exceptions for non-square matrices have been changed to LinAlgError (from ValueError).
random.permutation for multidimensional arrays

permutation uses the fast path in random.shuffle for all input array dimensions. Previously the fast path was only used for 1-d arrays.
Generalized ufuncs now accept axes, axis and keepdims arguments

One can control over which axes a generalized ufunc operates by passing in an axes argument, a list of tuples with indices of particular axes. For instance, for a signature of (i,j),(j,k)->(i,k) appropriate for matrix multiplication, the base elements are two-dimensional matrices and these are taken to be stored in the two last axes of each argument. The corresponding axes keyword would be [(-2, -1), (-2, -1), (-2, -1)]. If one wanted to use leading dimensions instead, one would pass in [(0, 1), (0, 1), (0, 1)].
For simplicity, for generalized ufuncs that operate on 1-dimensional arrays (vectors), a single integer is accepted instead of a single-element tuple, and for generalized ufuncs for which all outputs are scalars, the (empty) output tuples can be omitted. Hence, for a signature of (i),(i)->() appropriate for an inner product, one could pass in axes=[0, 0] to indicate that the vectors are stored in the first dimensions of the two input arguments.
As a short-cut for generalized ufuncs that are similar to reductions, i.e., that act on a single, shared core dimension such as the inner product example above, one can pass an axis argument. This is equivalent to passing in axes with identical entries for all arguments with that core dimension (e.g., for the example above, axes=[(axis,), (axis,)]).
Furthermore, like for reductions, for generalized ufuncs that have inputs that all have the same number of core dimensions and outputs with no core dimension, one can pass in keepdims to leave a dimension with size 1 in the outputs, thus allowing proper broadcasting against the original inputs. The location of the extra dimension can be controlled with axes. For instance, for the inner-product example, keepdims=True, axes=[-2, -2, -2] would act on the one-but-last dimension of the input arguments, and leave a size 1 dimension in that place in the output.
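As an illustrative sketch of the axes keyword for the (i,j),(j,k)->(i,k) signature discussed above (np.matmul only gained gufunc semantics in a later NumPy release; it is used here purely to illustrate the keyword, not as a 1.15 feature):

```python
import numpy as np

a = np.arange(6.0).reshape(2, 3)
b = np.arange(12.0).reshape(3, 4)

# `axes` names the core axes of each operand and of the output;
# here the core axes are simply the last two of each.
c = np.matmul(a, b, axes=[(-2, -1), (-2, -1), (-2, -1)])
```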
Previously printing float128 values was buggy on ppc, since the special double-double floating-point format on these systems was not accounted for. float128s now print with correct rounding and uniqueness.
Warning to ppc users: You should upgrade glibc if it is version <= 2.23, especially if using float128. On ppc, glibc's malloc in these versions often misaligns allocated memory which can crash numpy when using float128 values.
np.take_along_axis and np.put_along_axis functions

When used on multidimensional arrays, argsort, argmin, argmax, and argpartition return arrays that are difficult to use as indices. take_along_axis provides an easy way to use these indices to lookup values within an array, so that:
np.take_along_axis(a, np.argsort(a, axis=axis), axis=axis)
is the same as:
np.sort(a, axis=axis)
np.put_along_axis acts as the dual operation for writing to these indices within an array.
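A concrete sketch of the pair in action (not part of the original notes):

```python
import numpy as np

a = np.array([[30, 10, 20],
              [60, 40, 50]])

idx = np.argsort(a, axis=1)

# Gather values along axis 1 using the argsort indices; this equals
# np.sort(a, axis=1).
sorted_a = np.take_along_axis(a, idx, axis=1)

# put_along_axis is the writing counterpart: here, write 0 at each
# row's minimum position.
np.put_along_axis(a, np.argmin(a, axis=1)[:, None], 0, axis=1)
```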
This is a bugfix release for bugs reported following the 1.14.4 release. The most significant fixes are:
The Python versions supported in this release are 2.7 and 3.4 - 3.6. The Python 3.6 wheels available from PIP are built with Python 3.6.2 and should be compatible with all previous versions of Python 3.6. The source releases were cythonized with Cython 0.28.2 and should work for the upcoming Python 3.7.
A total of 1 person contributed to this release. People with a “+” by theirnames contributed a patch for the first time.
This is a bugfix release for bugs reported following the 1.14.3 release. The most significant fixes are:
There are also improvements to printing of long doubles on PPC platforms. All is not yet perfect on that platform; the whitespace padding is still incorrect and is to be fixed in numpy 1.15, consequently NumPy still fails some printing-related (and other) unit tests on ppc systems. However, the printed values are now correct.
Note that NumPy will error on import if it detects incorrect float32 dot results. This problem has been seen on the Mac when working in the Anaconda environment and is due to a subtle interaction between MKL and PyQt5. It is not strictly a NumPy problem, but it is best that users be aware of it. See the gh-8577 NumPy issue for more information.
The Python versions supported in this release are 2.7 and 3.4 - 3.6. The Python 3.6 wheels available from PIP are built with Python 3.6.2 and should be compatible with all previous versions of Python 3.6. The source releases were cythonized with Cython 0.28.2 and should work for the upcoming Python 3.7.
A total of 7 people contributed to this release. People with a “+” by theirnames contributed a patch for the first time.
A total of 11 pull requests were merged for this release.
This is a bugfix release for a few bugs reported following the 1.14.2 release:
The Python versions supported in this release are 2.7 and 3.4 - 3.6. The Python 3.6 wheels available from PIP are built with Python 3.6.2 and should be compatible with all previous versions of Python 3.6. The source releases were cythonized with Cython 0.28.2.
A total of 6 people contributed to this release. People with a “+” by theirnames contributed a patch for the first time.
A total of 8 pull requests were merged for this release.
This is a bugfix release for some bugs reported following the 1.14.1 release. The major problems dealt with are as follows.
The Python versions supported in this release are 2.7 and 3.4 - 3.6. The Python 3.6 wheels available from PIP are built with Python 3.6.2 and should be compatible with all previous versions of Python 3.6. The source releases were cythonized with Cython 0.26.1, which is known to not support the upcoming Python 3.7 release. People who wish to run Python 3.7 should check out the NumPy repo and try building with the, as yet, unreleased master branch of Cython.
A total of 4 people contributed to this release. People with a “+” by theirnames contributed a patch for the first time.
A total of 5 pull requests were merged for this release.
This is a bugfix release for some problems reported following the 1.14.0 release. The major problems fixed are the following.
- Problems with np.einsum due to the new optimized=True default. Some fixes for optimization have been applied and optimize=False is now the default.
- The sort order in np.unique when axis=<some-number> will now always be lexicographic in the subarray elements. In previous NumPy versions there was an optimization that could result in sorting the subarrays as unsigned byte strings.

The Python versions supported in this release are 2.7 and 3.4 - 3.6. The Python 3.6 wheels available from PIP are built with Python 3.6.2 and should be compatible with all previous versions of Python 3.6. The source releases were cythonized with Cython 0.26.1, which is known to not support the upcoming Python 3.7 release. People who wish to run Python 3.7 should check out the NumPy repo and try building with the, as yet, unreleased master branch of Cython.
A total of 14 people contributed to this release. People with a “+” by theirnames contributed a patch for the first time.
A total of 36 pull requests were merged for this release.
Numpy 1.14.0 is the result of seven months of work and contains a large number of bug fixes and new features, along with several changes with potential compatibility issues. The major change that users will notice is the stylistic change in the way numpy arrays and scalars are printed, a change that will affect doctests. See below for details on how to preserve the old style printing when needed.
A major decision affecting future development concerns the schedule for dropping Python 2.7 support in the runup to 2020. The decision has been made to support 2.7 for all releases made in 2018, with the last release being designated a long term release with support for bug fixes extending through 2019. In 2019 support for 2.7 will be dropped in all new releases. More details can be found in the relevant NEP.
This release supports Python 2.7 and 3.4 - 3.6.
New features:

- genfromtxt, loadtxt, fromregex and savetxt can now handle files with arbitrary Python supported encoding.
- parametrize: decorator added to numpy.testing.
- chebinterpolate: Interpolate function at Chebyshev points.
- format_float_positional and format_float_scientific: format floating-point scalars unambiguously with control of rounding and padding.
- PyArray_ResolveWritebackIfCopy and PyArray_SetWritebackIfCopyBase, new C-API functions useful in achieving PyPy compatibility.

Deprecations:

- Using np.bool_ objects in place of integers is deprecated. Previously operator.index(np.bool_) was legal and allowed constructs such as [1, 2, 3][np.True_]. That was misleading, as it behaved differently from np.array([1, 2, 3])[np.True_].
- Truth testing of an empty array is deprecated. To check if an array is not empty, use array.size > 0.
- Calling np.bincount with minlength=None is deprecated. minlength=0 should be used instead.
- Calling np.fromstring with the default value of the sep argument is deprecated. When that argument is not provided, a broken version of np.frombuffer is used that silently accepts unicode strings and, after encoding them as either utf-8 (python 3) or the default encoding (python 2), treats them as binary data. If reading binary data is desired, np.frombuffer should be used directly.
- The style option of array2string is deprecated in non-legacy printing mode.
- PyArray_SetUpdateIfCopyBase has been deprecated. For NumPy versions >= 1.14 use PyArray_SetWritebackIfCopyBase instead, see C API changes below for more details.
- Use of UPDATEIFCOPY arrays is deprecated, see C API changes below for details. We will not be dropping support for those arrays, but they are not compatible with PyPy.

Future changes:

- np.issubdtype will stop downcasting dtype-like arguments. It might be expected that issubdtype(np.float32, 'float64') and issubdtype(np.float32, np.float64) mean the same thing - however, there was an undocumented special case that translated the former into issubdtype(np.float32, np.floating), giving the surprising result of True.
This translation now gives a warning that explains what translation isoccurring. In the future, the translation will be disabled, and the firstexample will be made equivalent to the second.
The np.linalg.lstsq default for rcond will be changed. The rcond parameter to np.linalg.lstsq will change its default to machine precision times the largest of the input array dimensions. A FutureWarning is issued when rcond is not passed explicitly.
a.flat.__array__() will return a writeable copy of a when a is non-contiguous. Previously it returned an UPDATEIFCOPY array when a was writeable. Currently it returns a non-writeable copy. See gh-7054 for a discussion of the issue.
Unstructured void array's .item method will return a bytes object. In the future, calling .item() on arrays or scalars of np.void datatype will return a bytes object instead of a buffer or int array, the same as returned by bytes(void_scalar). This may affect code which assumed the return value was mutable, which will no longer be the case. A FutureWarning is now issued when this would occur.
There was a FutureWarning about this change in NumPy 1.11.x. In short, it is now the case that, when changing a view of a masked array, changes to the mask are propagated to the original. That was not previously the case. This change affects slices in particular. Note that this does not yet work properly if the mask of the original array is nomask and the mask of the view is changed. See gh-5580 for an extended discussion. The original behavior of having a copy of the mask can be obtained by calling the unshare_mask method of the view.
np.ma.masked is no longer writeable

Attempts to mutate the masked constant now error, as the underlying arrays are marked readonly. In the past, it was possible to get away with:

# emulating a function that sometimes returns np.ma.masked
val = random.choice([np.ma.masked, 10])
val_arr = np.asarray(val)
val_arr += 1  # now errors, previously changed np.ma.masked.data
np.ma functions producing fill_values have changed

Previously, np.ma.default_fill_value would return a 0d array, but np.ma.minimum_fill_value and np.ma.maximum_fill_value would return a tuple of the fields. Instead, all three methods return a structured np.void object, which is what you would already find in the .fill_value attribute.
Additionally, the dtype guessing now matches that of np.array - so when passing a python scalar x, maximum_fill_value(x) is always the same as maximum_fill_value(np.array(x)). Previously x = long(1) on Python 2 violated this assumption.
a.flat.__array__() returns non-writeable arrays when a is non-contiguous

The intent is that the UPDATEIFCOPY array previously returned when a was non-contiguous will be replaced by a writeable copy in the future. This temporary measure is aimed to notify folks who expect the underlying array to be modified in this situation that that will no longer be the case. The most likely places for this to be noticed is when expressions of the form np.asarray(a.flat) are used, or when a.flat is passed as the out parameter to a ufunc.
np.tensordot now returns zero array when contracting over 0-length dimension

Previously np.tensordot raised a ValueError when contracting over a 0-length dimension. Now it returns a zero array, which is consistent with the behaviour of np.dot and np.einsum.
numpy.testing reorganized

This is not expected to cause problems, but possibly something has been left out. If you experience an unexpected import problem using numpy.testing let us know.
np.asfarray no longer accepts non-dtypes through the dtype argument

This previously would accept dtype=some_array, with the implied semantics of dtype=some_array.dtype. This was undocumented, unique across the numpy functions, and if used would likely correspond to a typo.
np.linalg.norm preserves float input types, even for arbitrary orders

Previously, this would promote to float64 when arbitrary orders were passed, despite not doing so under the simple cases:

>>> f32 = np.float32([[1, 2]])
>>> np.linalg.norm(f32, 2.0, axis=-1).dtype
dtype('float32')
>>> np.linalg.norm(f32, 2.0001, axis=-1).dtype
dtype('float64')  # numpy 1.13
dtype('float32')  # numpy 1.14

This change affects only float32 and float16 arrays.
count_nonzero(arr, axis=()) now counts over no axes, not all axes

Elsewhere, axis==() is always understood as "no axes", but count_nonzero had a special case to treat this as "all axes". This was inconsistent and surprising. The correct way to count over all axes has always been to pass axis==None.
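A short sketch of the distinction (not part of the original notes):

```python
import numpy as np

a = np.array([[0, 1], [2, 0]])

over_all = np.count_nonzero(a, axis=None)  # counts over all axes
over_none = np.count_nonzero(a, axis=())   # counts over no axes: a 0/1 array
```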
__init__.py files added to test directories

This is for pytest compatibility in the case of duplicate test file names in the different directories. As a result, run_module_suite no longer works, i.e., python <path-to-test-file> results in an error.
.astype(bool) on unstructured void arrays now calls bool on each element

On Python 2, void_array.astype(bool) would always return an array of True, unless the dtype is V0. On Python 3, this operation would usually crash. Going forwards, astype matches the behavior of bool(np.void), considering a buffer of all zeros as false, and anything else as true. Checks for V0 can still be done with arr.dtype.itemsize == 0.
MaskedArray.squeeze never returns np.ma.masked

np.squeeze is documented as returning a view, but the masked variant would sometimes return masked, which is not a view. This has been fixed, so that the result is always a view on the original masked array.

This breaks any code that used masked_arr.squeeze() is np.ma.masked, but fixes code that writes to the result of squeeze().
Renamed first parameter of can_cast from from to from_

The previous parameter name from is a reserved keyword in Python, which made it difficult to pass the argument by name. This has been fixed by renaming the parameter to from_.
isnat raises TypeError when passed wrong type

The ufunc isnat used to raise a ValueError when it was not passed variables of type datetime or timedelta. This has been changed to raising a TypeError.
dtype.__getitem__ raises TypeError when passed wrong type

When indexed with a float, the dtype object used to raise ValueError.
Default __str__ and __repr__ for user-defined types removed

Previously, user-defined types could fall back to a default implementation of __str__ and __repr__ implemented in numpy, but this has now been removed. Now user-defined types will fall back to the python default object.__str__ and object.__repr__.
The str and repr of ndarrays and numpy scalars have been changed in a variety of ways. These changes are likely to break downstream users' doctests.
These new behaviors can be disabled to mostly reproduce numpy 1.13 behavior by enabling the new 1.13 "legacy" printing mode. This is enabled by calling np.set_printoptions(legacy="1.13"), or using the new legacy argument to np.array2string, as np.array2string(arr, legacy='1.13').
In summary, the major changes are:
- The repr of float arrays often omits a space previously printed in the sign position. See the new sign option to np.set_printoptions.
- Floating-point arrays and scalars use a new algorithm for decimal representations, giving the shortest unique representation. This will usually shorten float16 fractional output, and sometimes float32 and float128 output. float64 should be unaffected. See the new floatmode option to np.set_printoptions.
- The str of floating-point scalars is no longer truncated in python2.
- Non-finite complex scalars print like nanj instead of nan*j.
- NaT values in datetime arrays are now properly aligned.
- Arrays and scalars of np.void datatype are now printed using hex notation.
- The linewidth format option is now always respected. The repr or str of an array will never exceed this, unless a single element is too wide.
- For summarization (the use of ... to shorten long arrays): a trailing comma is no longer inserted for str. Previously, str(np.arange(1001)) gave '[   0    1    2 ...,  998  999 1000]', which has an extra comma.
- For arrays of 2-D and beyond, when ... is printed on its own line in order to summarize any but the last axis, newlines are now appended to that line to match its leading newlines and a trailing space character is removed.
- MaskedArray arrays now separate printed elements with commas, always print the dtype, and correctly wrap the elements of long arrays to multiple lines. If there is more than 1 dimension, the array attributes are now printed in a new "left-justified" printing style.
- recarray arrays no longer print a trailing space before their dtype, and wrap to the right number of columns.
- 0d arrays no longer have their own idiosyncratic implementations of str and repr. The style argument to np.array2string is deprecated.
- Arrays of bool datatype will omit the datatype in the repr.
- User-defined dtypes (subclasses of np.generic) now need to implement __str__ and __repr__.

Some of these changes are described in more detail below. If you need to retain the previous behavior for doctests or other reasons, you may want to do something like:
# FIXME: We need the str/repr formatting used in Numpy < 1.14.
try:
    np.set_printoptions(legacy='1.13')
except TypeError:
    pass
UPDATEIFCOPY arrays

UPDATEIFCOPY arrays are contiguous copies of existing arrays, possibly with different dimensions, whose contents are copied back to the original array when their refcount goes to zero and they are deallocated. Because PyPy does not use refcounts, they do not function correctly with PyPy. NumPy is in the process of eliminating their use internally and two new C-API functions,

PyArray_SetWritebackIfCopyBase and PyArray_ResolveWritebackIfCopy,

have been added together with a complementary flag, NPY_ARRAY_WRITEBACKIFCOPY. Using the new functionality also requires that some flags be changed when new arrays are created, to wit: NPY_ARRAY_INOUT_ARRAY should be replaced by NPY_ARRAY_INOUT_ARRAY2 and NPY_ARRAY_INOUT_FARRAY should be replaced by NPY_ARRAY_INOUT_FARRAY2. Arrays created with these new flags will then have the WRITEBACKIFCOPY semantics.
If PyPy compatibility is not a concern, these new functions can be ignored, although there will be a DeprecationWarning. If you do wish to pursue PyPy compatibility, more information on these functions and their use may be found in the c-api documentation and the example in how-to-extend.
genfromtxt, loadtxt, fromregex and savetxt can now handle files with arbitrary encoding supported by Python via the encoding argument. For backward compatibility the argument defaults to the special bytes value, which continues to treat text as raw byte values and continues to pass latin1 encoded bytes to custom converters. Using any other value (including None for system default) will switch the functions to real text IO, so one receives unicode strings instead of bytes in the resulting arrays.
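A minimal sketch of the new text-mode behavior (the temporary file and its two-column layout are invented for illustration):

```python
import os
import tempfile

import numpy as np

# Write a small UTF-8 encoded file, then read it back with an explicit
# encoding; string fields now come back as unicode str, not bytes.
with tempfile.NamedTemporaryFile(mode="w", suffix=".csv",
                                 encoding="utf-8", delete=False) as f:
    f.write(u"alpha,1.5\nbeta,2.5\n")
    path = f.name

data = np.genfromtxt(path, delimiter=",", dtype=None, encoding="utf-8")
os.remove(path)
print(data["f0"])  # ['alpha' 'beta']
```

With the default encoding='bytes', the first column would instead arrive as b'alpha' and b'beta'.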
nose plugins are usable by numpy.testing.Tester

numpy.testing.Tester is now aware of nose plugins that are outside the nose built-in ones. This allows using, for example, nose-timer like so: np.test(extra_argv=['--with-timer', '--timer-top-n', '20']) to obtain the runtime of the 20 slowest tests. An extra keyword timer was also added to Tester.test, so np.test(timer=20) will also report the 20 slowest tests.
parametrize decorator added to numpy.testing

A basic parametrize decorator is now available in numpy.testing. It is intended to allow rewriting yield based tests that have been deprecated in pytest so as to facilitate the transition to pytest in the future. The nose testing framework has not been supported for several years and looks like abandonware.
The new parametrize decorator does not have the full functionality of the one in pytest. It doesn't work for classes, doesn't support nesting, and does not substitute variable names. Even so, it should be adequate to rewrite the NumPy tests.
chebinterpolate function added to numpy.polynomial.chebyshev

The new chebinterpolate function interpolates a given function at the Chebyshev points of the first kind. A new Chebyshev.interpolate class method adds support for interpolation over arbitrary intervals using the scaled and shifted Chebyshev points of the first kind.
With Python versions containing the lzma module, the text IO functions can now transparently read from files with an xz or lzma extension.
sign option added to np.set_printoptions and np.array2string

This option controls printing of the sign of floating-point types, and may be one of the characters '-', '+' or ' '. With '+' numpy always prints the sign of positive values, with ' ' it always prints a space (whitespace character) in the sign position of positive values, and with '-' it will omit the sign character for positive values. The new default is '-'.
This new default changes the float output relative to numpy 1.13. The oldbehavior can be obtained in 1.13 “legacy” printing mode, see compatibilitynotes above.
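As a quick sketch of the two ends of this option:

```python
import numpy as np

a = np.array([1.5, -2.5])
# '+' always prints an explicit sign for positive values; '-' (the new
# default) omits it, padding with a space only when needed for alignment.
plus = np.array2string(a, sign='+')
minus = np.array2string(a, sign='-')
print(plus)   # [+1.5 -2.5]
print(minus)  # [ 1.5 -2.5]
```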
hermitian option added to np.linalg.matrix_rank

The new hermitian option allows choosing between standard SVD-based matrix rank calculation and the more efficient eigenvalue-based method for symmetric/hermitian matrices.
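A small sketch with a rank-deficient symmetric matrix (the matrix is invented for illustration):

```python
import numpy as np

# Rank-1 symmetric matrix: hermitian=True takes the eigenvalue path,
# which is cheaper than the default SVD for symmetric/hermitian input.
a = np.array([[1.0, 2.0],
              [2.0, 4.0]])
rank = np.linalg.matrix_rank(a, hermitian=True)
print(rank)  # 1
```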
threshold and edgeitems options added to np.array2string

These options could previously be controlled using np.set_printoptions, but now can be changed on a per-call basis as arguments to np.array2string.
concatenate and stack gained an out argument

A preallocated buffer of the desired dtype can now be used for the output of these functions.
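A minimal sketch of the new out argument:

```python
import numpy as np

a = np.arange(3)
b = np.arange(3, 6)

# concatenate writes into a preallocated buffer instead of allocating.
out = np.empty(6, dtype=a.dtype)
np.concatenate([a, b], out=out)
print(out)  # [0 1 2 3 4 5]

# The same works for stack.
out2 = np.empty((2, 3), dtype=a.dtype)
np.stack([a, b], out=out2)
```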
The PGI flang compiler is a Fortran front end for LLVM released by NVIDIA underthe Apache 2 license. It can be invoked by
python setup.py config --compiler=clang --fcompiler=flang install
There is little experience with this new compiler, so any feedback from peopleusing it will be appreciated.
Numerator degrees of freedom in random.noncentral_f need only be positive

Prior to NumPy 1.14.0, the numerator degrees of freedom needed to be > 1, but the distribution is valid for values > 0, which is the new requirement.
The GIL is released for all np.einsum variations

Some specific loop structures which have an accelerated loop version did not release the GIL prior to NumPy 1.14.0. This oversight has been fixed.
The np.einsum function will now call np.tensordot when appropriate. Because np.tensordot uses BLAS when possible, that will speed up execution. By default, np.einsum will also attempt optimization, as the overhead is small relative to the potential improvement in speed.
f2py now handles arrays of dimension 0

f2py now allows for the allocation of arrays of dimension 0. This allows for more consistent handling of corner cases downstream.
numpy.distutils supports using MSVC and mingw64-gfortran together

Numpy distutils now supports using Mingw64 gfortran and MSVC compilers together. This enables the production of Python extension modules on Windows containing Fortran code while retaining compatibility with the binaries distributed by Python.org. Not all use cases are supported, but most common ways to wrap Fortran for Python are functional.
Compilation in this mode is usually enabled automatically, and can be selected via the --fcompiler and --compiler options to setup.py. Moreover, linking Fortran codes to static OpenBLAS is supported; by default, a gfortran-compatible static archive openblas.a is looked for.
np.linalg.pinv now works on stacked matrices

Previously it was limited to a single 2d array.
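A small sketch of the stacked behavior (the two diagonal matrices are invented for illustration):

```python
import numpy as np

# A stack of two 2x2 matrices; pinv is applied to each matrix in the
# stack, broadcasting over the leading axis.
stacked = np.array([[[1.0, 0.0], [0.0, 2.0]],
                    [[2.0, 0.0], [0.0, 4.0]]])
pinvs = np.linalg.pinv(stacked)
print(pinvs.shape)  # (2, 2, 2)
print(pinvs[0])     # pseudo-inverse of diag(1, 2), i.e. diag(1, 0.5)
```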
numpy.save aligns data to 64 bytes instead of 16

Saving NumPy arrays in the npy format with numpy.save inserts padding before the array data to align it at 64 bytes. Previously this was only 16 bytes (and sometimes less due to a bug in the code for version 2). Now the alignment is 64 bytes, which matches the widest SIMD instruction set commonly available, and is also the most common cache line size. This makes npy files easier to use in programs which open them with mmap, especially on Linux, where an mmap offset must be a multiple of the page size.
In Python 3.6+, numpy.savez and numpy.savez_compressed now write directly to a ZIP file, without creating intermediate temporary files.
Structured types can contain zero fields, and string dtypes can contain zerocharacters. Zero-length strings still cannot be created directly, and must beconstructed through structured dtypes:
str0 = np.empty(10, np.dtype([('v', str, N)]))['v']
void0 = np.empty(10, np.void)
It was always possible to work with these, but the following operations arenow supported for these arrays:
- arr.sort()
- arr.view(bytes)
- arr.resize(…)
- pickle.dumps(arr)
decimal.Decimal in np.lib.financial

Unless otherwise stated, all functions within the financial package now support using the decimal.Decimal built-in type.
The str and repr of floating-point values (16, 32, 64 and 128 bit) are now printed to give the shortest decimal representation which uniquely identifies the value from others of the same type. Previously this was only true for float64 values. The remaining float types will now often be shorter than in numpy 1.13. Arrays printed in scientific notation now also use the shortest scientific representation, instead of fixed precision as before.
Additionally, the str of float scalars will no longer be truncated in python2, unlike python2 floats. np.double scalars now have a str and repr identical to that of a python3 float.
New functions np.format_float_scientific and np.format_float_positional are provided to generate these decimal representations.
A new option floatmode has been added to np.set_printoptions and np.array2string, which gives control over uniqueness and rounding of printed elements in an array. The new default is floatmode='maxprec' with precision=8, which will print at most 8 fractional digits, or fewer if an element can be uniquely represented with fewer. A useful new mode is floatmode="unique", which will output enough digits to specify the array elements uniquely.
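A minimal sketch contrasting the two modes:

```python
import numpy as np

a = np.array([0.1, 0.123456789])
# 'maxprec' (the default): at most `precision` fractional digits, fewer
# when a value is uniquely represented with fewer.
m = np.array2string(a, floatmode='maxprec', precision=4)
# 'unique': always enough digits to round-trip each element exactly.
u = np.array2string(a, floatmode='unique')
print(m)
print(u)
```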
Numpy complex floating-point scalars with values like inf*j or nan*j now print as infj and nanj, like the pure-python complex type.
The FloatFormat and LongFloatFormat classes are deprecated and should both be replaced by FloatingFormat. Similarly, ComplexFormat and LongComplexFormat should be replaced by ComplexFloatingFormat.
void datatype elements are now printed in hex notation

A hex representation compatible with the python bytes type is now printed for unstructured np.void elements, e.g., the V4 datatype. Previously, in python2 the raw void data of the element was printed to stdout, whereas in python3 the integer byte values were shown.
Printing of void datatypes is now independently customizable

The printing style of np.void arrays is now independently customizable using the formatter argument to np.set_printoptions, using the 'void' key, instead of the catch-all numpystr key as before.
np.loadtxt

np.loadtxt now reads files in chunks instead of all at once, which decreases its memory usage significantly for large files.
The indexing and assignment of structured arrays with multiple fields haschanged in a number of ways, as warned about in previous releases.
First, indexing a structured array with multiple fields, e.g., arr[['f1', 'f3']], returns a view into the original array instead of a copy. The returned view will have extra padding bytes corresponding to intervening fields in the original array, unlike the copy in 1.13, which will affect code such as arr[['f1', 'f3']].view(newdtype).
Second, assignment between structured arrays will now occur "by position" instead of "by field name". The Nth field of the destination will be set to the Nth field of the source regardless of field name, unlike in numpy versions 1.6 to 1.13, in which fields in the destination array were set to the identically-named field in the source array or to 0 if the source did not have a field.
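The by-position rule can be sketched with two dtypes whose field names (invented here for illustration) do not overlap at all:

```python
import numpy as np

# Field names differ between source and destination; since 1.14 the
# assignment matches fields by position, not by name.
src = np.array([(1, 2.0)], dtype=[('a', 'i4'), ('b', 'f4')])
dst = np.zeros(1, dtype=[('x', 'i4'), ('y', 'f4')])
dst[:] = src
print(dst)  # first field <- 'a', second field <- 'b'
```

Under the old by-name rule, dst would have stayed all zero because no field name matches.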
Correspondingly, the order of fields in a structured dtype now matters when computing dtype equality. For example, with the dtypes
x = dtype({'names': ['A', 'B'], 'formats': ['i4', 'f4'], 'offsets': [0, 4]})
y = dtype({'names': ['B', 'A'], 'formats': ['f4', 'i4'], 'offsets': [4, 0]})
the expression x == y will now return False, unlike before. This makes dictionary-based dtype specifications like dtype({'a': ('i4', 0), 'b': ('f4', 4)}) dangerous in python < 3.6, since dict key order is not preserved in those versions.
Assignment from a structured array to a boolean array now raises a ValueError,unlike in 1.13, where it always set the destination elements toTrue.
Assignment from structured array with more than one field to a non-structuredarray now raises a ValueError. In 1.13 this copied just the first field of thesource to the destination.
Using field "titles" in multiple-field indexing is now disallowed, as is repeating a field name in a multiple-field index.
The documentation for structured arrays in the user guide has beensignificantly updated to reflect these changes.
np.set_string_function

Previously, unlike most other numpy scalars, the str and repr of integer and void scalars could be controlled by np.set_string_function. This is no longer possible.
style arg of array2string deprecated

Previously, the str and repr of 0d arrays had idiosyncratic implementations which returned str(a.item()) and 'array(' + repr(a.item()) + ')' respectively for a 0d array a, unlike both numpy scalars and higher dimension ndarrays.
Now, the str of a 0d array acts like a numpy scalar using str(a[()]), and the repr acts like higher dimension arrays using formatter(a[()]), where formatter can be specified using np.set_printoptions. The style argument of np.array2string is deprecated.
This new behavior is disabled in 1.13 legacy printing mode, see compatibilitynotes above.
RandomState using an array requires a 1-d array

RandomState previously would accept empty arrays or arrays with 2 or more dimensions, which resulted in either a failure to seed (empty arrays) or in some of the passed values being ignored when setting the seed.
MaskedArray objects show a more useful repr

The repr of a MaskedArray is now closer to the python code that would produce it, with arrays now being shown with commas and dtypes. Like the other formatting changes, this can be disabled with the 1.13 legacy printing mode in order to help transition doctests.
repr of np.polynomial classes is more explicit

It now shows the domain and window parameters as keyword arguments to make them clearer:
>>> np.polynomial.Polynomial(range(4))
Polynomial([0., 1., 2., 3.], domain=[-1, 1], window=[-1, 1])
This is a bugfix release for some problems found since 1.13.1. The mostimportant fixes are for CVE-2017-12852 and temporary elision. Users of earlierversions of 1.13 should upgrade.
The Python versions supported are 2.7 and 3.4 - 3.6. The Python 3.6 wheels available from PIP are built with Python 3.6.2 and should be compatible with all previous versions of Python 3.6. The release was cythonized with Cython 0.26.1, which should be free of the bugs found in 0.27 while also being compatible with Python 3.7-dev. The Windows wheels were built with OpenBLAS instead of ATLAS, which should improve the performance of the linear algebra functions.
The NumPy 1.13.3 release is a re-release of 1.13.2, which suffered from abug in Cython 0.27.0.
A total of 12 people contributed to this release. People with a “+” by theirnames contributed a patch for the first time.
A total of 22 pull requests were merged for this release.
This is a bugfix release for some problems found since 1.13.1. The mostimportant fixes are for CVE-2017-12852 and temporary elision. Users of earlierversions of 1.13 should upgrade.
The Python versions supported are 2.7 and 3.4 - 3.6. The Python 3.6 wheels available from PIP are built with Python 3.6.2 and should be compatible with all previous versions of Python 3.6. The Windows wheels are now built with OpenBLAS instead of ATLAS, which should improve the performance of the linear algebra functions.
A total of 12 people contributed to this release. People with a “+” by theirnames contributed a patch for the first time.
A total of 20 pull requests were merged for this release.
This is a bugfix release for problems found in 1.13.0. The major changes are fixes for the new memory overlap detection and temporary elision, as well as reversion of the removal of the boolean binary - operator. Users of 1.13.0 should upgrade.
The Python versions supported are 2.7 and 3.4 - 3.6. Note that the Python 3.6 wheels available from PIP are built against 3.6.1, hence will not work when used with 3.6.0 due to Python bug 29943. NumPy 1.13.2 will be released shortly after Python 3.6.2 is out to fix that problem. If you are using 3.6.0, the workaround is to upgrade to 3.6.1 or use an earlier Python version.
A total of 19 pull requests were merged for this release.
A total of 12 people contributed to this release. People with a “+” by theirnames contributed a patch for the first time.
This release supports Python 2.7 and 3.4 - 3.6.
- Operations like a + b + c will reuse temporaries on some platforms, resulting in less memory use and faster execution.
- Inplace operations check if inputs overlap outputs and create temporaries to avoid problems.
- New __array_ufunc__ attribute provides improved ability for classes to override default ufunc behavior.
- New np.block function for creating blocked arrays.
- New np.positive ufunc.
- New np.divmod ufunc provides more efficient divmod.
- New np.isnat ufunc tests for NaT special values.
- New np.heaviside ufunc computes the Heaviside function.
- New np.isin function, improves on in1d.
- New np.block function for creating blocked arrays.
- New PyArray_MapIterArrayCopyIfOverlap added to NumPy C-API. See below for details.
- Calling np.fix, np.isposinf, and np.isneginf with f(x, y=out) is deprecated - the argument should be passed as f(x, out=out), which matches other ufunc-like interfaces.
- Use of the C-API NPY_CHAR type number, deprecated since version 1.7, will now raise deprecation warnings at runtime. Extensions built with older f2py versions need to be recompiled to remove the warning.
- np.ma.argsort, np.ma.minimum.reduce, and np.ma.maximum.reduce should be called with an explicit axis argument when applied to arrays with more than 2 dimensions, as the default value of this argument (None) is inconsistent with the rest of numpy (-1, 0, and 0, respectively).
- np.ma.MaskedArray.mini is deprecated, as it almost duplicates the functionality of np.MaskedArray.min. Exactly equivalent behaviour can be obtained with np.ma.minimum.reduce.
- The single-argument form of np.ma.minimum and np.ma.maximum is deprecated; np.ma.minimum(x) should now be spelt np.ma.minimum.reduce(x), which is consistent with how this would be done with np.minimum.
- Calling ndarray.conjugate on non-numeric dtypes is deprecated (it should match the behavior of np.conjugate, which throws an error).
- Calling expand_dims when the axis keyword does not satisfy -a.ndim - 1 <= axis <= a.ndim, where a is the array being reshaped, is deprecated.
- The FutureWarning raised in NumPy 1.12 incorrectly reported this change as scheduled for NumPy 1.13 rather than NumPy 1.14.
- numpy.distutils now automatically determines C-file dependencies with GCC compatible compilers.
- numpy.hstack() now throws ValueError instead of IndexError when input is empty.
- Out-of-bound axis arguments now raise np.AxisError instead of a mixture of IndexError and ValueError. For backwards compatibility, AxisError subclasses both of these.
- Support has been removed for certain obscure dtypes that were unintentionally allowed, of the form (old_dtype, new_dtype), where either of the dtypes is or contains the object dtype. As an exception, dtypes of the form (object, [('name', object)]) are still supported due to evidence of existing use.
See Changes section for more detail.
- partition, TypeError when a non-integer partition index is used.
- NpyIter_AdvancedNew, ValueError when oa_ndim == 0 and op_axes is NULL.
- negative(bool_), TypeError when negative is applied to booleans.
- subtract(bool_, bool_), TypeError when subtracting boolean from boolean.
- np.equal, np.not_equal, object identity doesn't override failed comparison.
- np.equal, np.not_equal, object identity doesn't override non-boolean comparison.
- np.alterdot() and np.restoredot() removed.

See Changes section for more detail.
- numpy.average preserves subclasses.
- array == None and array != None do element-wise comparison.
- np.equal, np.not_equal, object identity doesn't override comparison result.

Previously bool(dtype) would fall back to the default python implementation, which checked if len(dtype) > 0. Since dtype objects implement __len__ as the number of record fields, bool of scalar dtypes would evaluate to False, which was unintuitive. Now bool(dtype) == True for all dtypes.
__getslice__ and __setslice__ are no longer needed in ndarray subclasses

When subclassing np.ndarray in Python 2.7, it is no longer necessary to implement __*slice__ on the derived class, as __*item__ will intercept these calls correctly.
Any code that did implement these will work exactly as before. Code that invokes ndarray.__getslice__ (e.g. through super(...).__getslice__) will now issue a DeprecationWarning - .__getitem__(slice(start, end)) should be used instead.
Indexing MaskedArray with ... (ellipsis) now returns MaskedArray

This behavior mirrors that of np.ndarray, and accounts for nested arrays in MaskedArrays of object dtype, and ellipsis combined with other forms of indexing.
It is now allowed to remove a zero-sized axis from NpyIter, which may mean that code removing axes from NpyIter has to add an additional check when accessing the removed dimensions later on.
The largest followup change is that gufuncs are now allowed to have zero-sized inner dimensions. This means that a gufunc now has to anticipate an empty inner dimension, while previously this was not possible and an error was raised instead.
For most gufuncs no change should be necessary. However, it is now possible for gufuncs with a signature such as (..., N, M) -> (..., M) to return a valid result if N = 0 without further wrapping code.
PyArray_MapIterArrayCopyIfOverlap added to NumPy C-API

Similar to PyArray_MapIterArray but with an additional copy_if_overlap argument. If copy_if_overlap != 0, it checks if the input has memory overlap with any of the other arrays and makes copies as appropriate to avoid problems if the input is modified during the iteration. See the documentation for more details.
__array_ufunc__ added

This is the renamed and redesigned __numpy_ufunc__. Any class, ndarray subclass or not, can define this method or set it to None in order to override the behavior of NumPy's ufuncs. This works quite similarly to Python's __mul__ and other binary operation routines. See the documentation for a more detailed description of the implementation and behavior of this new option. The API is provisional; we do not yet guarantee backward compatibility, as modifications may be made pending feedback. See the NEP and documentation for more details.
positive ufunc

This ufunc corresponds to unary +, but unlike + on an ndarray it will raise an error if array values do not support numeric operations.
divmod ufunc

This ufunc corresponds to the Python builtin divmod, and is used to implement divmod when called on numpy arrays. np.divmod(x, y) calculates a result equivalent to (np.floor_divide(x, y), np.remainder(x, y)) but is approximately twice as fast as calling the functions separately.
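A quick sketch, including a negative value to show the floor semantics:

```python
import numpy as np

# One ufunc call instead of separate floor_divide and remainder passes.
q, r = np.divmod(np.array([7, -7]), 3)
print(q)  # [ 2 -3]
print(r)  # [1 2]
```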
np.isnat ufunc tests for NaT special datetime and timedelta values

The new ufunc np.isnat finds the positions of special NaT values within datetime and timedelta arrays. This is analogous to np.isnan.
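A minimal sketch (the dates are invented for illustration):

```python
import numpy as np

dates = np.array(['2017-01-01', 'NaT', '2017-03-01'], dtype='datetime64[D]')
# True exactly where the array holds the NaT sentinel.
print(np.isnat(dates))  # [False  True False]
```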
np.heaviside ufunc computes the Heaviside function

The new function np.heaviside(x, h0) (a ufunc) computes the Heaviside function:
                   { 0    if x < 0,
heaviside(x, h0) = { h0   if x == 0,
                   { 1    if x > 0.
np.block function for creating blocked arrays

Add a new block function to the current stacking functions vstack, hstack, and stack. This allows concatenation across multiple axes simultaneously, with a similar syntax to array creation, but where elements can themselves be arrays. For instance:
>>> A = np.eye(2) * 2
>>> B = np.eye(3) * 3
>>> np.block([
...     [A, np.zeros((2, 3))],
...     [np.ones((3, 2)), B]
... ])
array([[ 2.,  0.,  0.,  0.,  0.],
       [ 0.,  2.,  0.,  0.,  0.],
       [ 1.,  1.,  3.,  0.,  0.],
       [ 1.,  1.,  0.,  3.,  0.],
       [ 1.,  1.,  0.,  0.,  3.]])
While primarily useful for block matrices, this works for arbitrary dimensionsof arrays.
It is similar to Matlab’s square bracket notation for creating block matrices.
isin function, improving on in1d

The new function isin tests whether each element of an N-dimensional array is present anywhere within a second array. It is an enhancement of in1d that preserves the shape of the first array.
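A minimal sketch of the shape-preserving behavior:

```python
import numpy as np

element = np.array([[0, 2],
                    [4, 6]])
# Unlike in1d, the boolean result keeps the 2-d shape of `element`.
mask = np.isin(element, [1, 2, 4, 8])
print(mask)  # [[False  True]
             #  [ True False]]
```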
On platforms providing the backtrace function, NumPy will try to avoid creating temporaries in expressions involving basic numeric types. For example, d = a + b + c is transformed to d = a + b; d += c, which can improve performance for large arrays as less memory bandwidth is required to perform the operation.
axis argument for unique

In an N-dimensional array, the user can now choose the axis along which to look for duplicate N-1-dimensional elements using numpy.unique. The original behaviour is recovered if axis=None (default).
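A quick sketch with duplicate rows:

```python
import numpy as np

a = np.array([[1, 0, 0],
              [1, 0, 0],
              [2, 3, 4]])
# axis=0 treats each row as one element, so duplicate rows collapse.
rows = np.unique(a, axis=0)
print(rows)  # [[1 0 0]
             #  [2 3 4]]
```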
np.gradient now supports unevenly spaced data

Users can now specify a non-constant spacing for data. In particular, np.gradient can now take:
a combination of scalar sample distances per axis (dx, dy, dz, …) and arrays giving the coordinates of the values along each dimension. This means that, e.g., it is now possible to do the following:
>>> f = np.array([[1, 2, 6], [3, 4, 5]], dtype=np.float)
>>> dx = 2.
>>> y = [1., 1.5, 3.5]
>>> np.gradient(f, dx, y)
[array([[ 1. ,  1. , -0.5],
       [ 1. ,  1. , -0.5]]), array([[ 2. ,  2. ,  2. ],
       [ 2. ,  1.7,  0.5]])]
apply_along_axis

Previously, only scalars or 1D arrays could be returned by the function passed to apply_along_axis. Now, it can return an array of any dimensionality (including 0D), and the shape of this array replaces the axis of the array being iterated over.
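A small sketch where the callback returns a 2-d array per row, so the iterated axis of length 3 is replaced by a (3, 3) block:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
# np.diag turns each length-3 row into a 3x3 diagonal matrix, so the
# result has shape (2, 3, 3) instead of the old scalar/1-D limitation.
out = np.apply_along_axis(np.diag, 1, a)
print(out.shape)  # (2, 3, 3)
```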
.ndim property added to dtype to complement .shape

For consistency with ndarray and broadcast, d.ndim is a shorthand for len(d.shape).
NumPy now supports memory tracing with the tracemalloc module of Python 3.6 or newer. Memory allocations from NumPy are placed into the domain defined by numpy.lib.tracemalloc_domain. Note that NumPy allocations will not show up in tracemalloc of earlier Python versions.
Setting NPY_RELAXED_STRIDES_DEBUG=1 in the environment when relaxed stridechecking is enabled will cause NumPy to be compiled with the affected stridesset to the maximum value of npy_intp in order to help detect invalid usage ofthe strides in downstream projects. When enabled, invalid usage often resultsin an error being raised, but the exact type of error depends on the details ofthe code. TypeError and OverflowError have been observed in the wild.
It was previously the case that this option was disabled for releases andenabled in master and changing between the two required editing the code. It isnow disabled by default but can be enabled for test builds.
Operations where ufunc input and output operands have memory overlapproduced undefined results in previous NumPy versions, due to datadependency issues. In NumPy 1.13.0, results from such operations arenow defined to be the same as for equivalent operations where there isno memory overlap.
Operations affected now make temporary copies, as needed to eliminate data dependency. As detecting these cases is computationally expensive, a heuristic is used, which may in rare cases result in needless temporary copies. For operations where the data dependency is simple enough for the heuristic to analyze, temporary copies will not be made even if the arrays overlap, if it can be deduced that copies are not necessary. As an example, np.add(a, b, out=a) will not involve copies.
To illustrate a previously undefined operation:
>>> x = np.arange(16).astype(float)
>>> np.add(x[1:], x[:-1], out=x[1:])
In NumPy 1.13.0 the last line is guaranteed to be equivalent to:
>>> np.add(x[1:].copy(), x[:-1].copy(), out=x[1:])
A similar operation with simple non-problematic data dependence is:
>>> x = np.arange(16).astype(float)
>>> np.add(x[1:], x[:-1], out=x[:-1])
It will continue to produce the same results as in previous NumPyversions, and will not involve unnecessary temporary copies.
The change applies also to in-place binary operations, for example:
>>> x = np.random.rand(500, 500)
>>> x += x.T
This statement is now guaranteed to be equivalent to x[...] = x + x.T, whereas in previous NumPy versions the results were undefined.
Extensions that incorporate Fortran libraries can now be built using the free MinGW toolset, also under Python 3.5. This works best for extensions that only do calculations and use the runtime modestly (reading and writing from files, for instance). Note that this does not remove the need for Mingwpy; if you make extensive use of the runtime, you will most likely run into issues. Instead, it should be regarded as a band-aid until Mingwpy is fully functional.
Extensions can also be compiled using the MinGW toolset using the runtimelibrary from the (moveable) WinPython 3.4 distribution, which can be useful forprograms with a PySide1/Qt4 front-end.
packbits and unpackbits

The functions numpy.packbits with boolean input and numpy.unpackbits have been optimized to be significantly faster for contiguous data.
In previous versions of NumPy, the finfo function returned invalid information about the double double format of the longdouble float type on Power PC (PPC). The invalid values resulted from the failure of the NumPy algorithm to deal with the variable number of digits in the significand that are a feature of PPC long doubles. This release bypasses the failing algorithm by using heuristics to detect the presence of the PPC double double format. A side effect of using these heuristics is that the finfo function is faster than in previous releases.
ndarray subclasses

Subclasses of ndarray with no repr specialization now correctly indent their data and type lines.
Comparisons of masked arrays were buggy for masked scalars and failed for structured arrays with dimension higher than one. Both problems are now solved. In the process, it was ensured that in getting the result for a structured array, masked fields are properly ignored, i.e., the result is equal if all fields that are non-masked in both are equal, thus making the behaviour identical to what one gets by comparing an unstructured masked array and then doing .all() over some axis.
np.matrix failed whenever one attempted to use it with booleans, e.g., np.matrix('True'). Now, this works as expected.
linalg operations now accept empty vectors and matrices

All of the following functions in np.linalg now work when given input arrays with a 0 in the last two dimensions: det, slogdet, pinv, eigvals, eigvalsh, eig, eigh.
NumPy comes bundled with a minimal implementation of LAPACK for systems without a LAPACK library installed, under the name of lapack_lite. This has been upgraded from LAPACK 3.0.0 (June 30, 1999) to LAPACK 3.2.2 (June 30, 2010). See the LAPACK changelogs for details on all the changes this entails.
While no new features are exposed through numpy, this fixes some bugs regarding "workspace" sizes, and in some places may use faster algorithms.
reduce of np.hypot and np.logical_xor allowed in more cases

This now works on empty arrays, returning 0, and can reduce over multiple axes. Previously, a ValueError was thrown in these cases.
repr of object arrays

Object arrays that contain themselves no longer cause a recursion error.
Object arrays that contain list objects are now printed in a way that makes clear the difference between a 2d object array and a 1d object array of lists.
argsort on masked arrays takes the same default arguments as sort

By default, argsort now places the masked values at the end of the sorted array, in the same way that sort already did. Additionally, the end_with argument is added to argsort, for consistency with sort. Note that this argument is not added at the end, so it breaks any code that passed fill_value as a positional argument.
average now preserves subclasses

For ndarray subclasses, numpy.average will now return an instance of the subclass, matching the behavior of most other NumPy functions such as mean. As a consequence, calls that previously returned a scalar may now return a subclass array scalar.
array == None and array != None do element-wise comparison

Previously these operations returned scalars False and True respectively.
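A quick sketch of the element-wise result:

```python
import numpy as np

a = np.array([1, None, 3], dtype=object)
# `== None` is used deliberately here: it now compares element-wise
# instead of returning the scalar False as in older releases.
print(a == None)  # [False  True False]
```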
np.equal, np.not_equal for object arrays ignores object identity

Previously, these functions always treated identical objects as equal. This had the effect of overriding comparison failures, comparison of objects that did not return booleans, such as np.arrays, and comparison of objects where the results differed from object identity, such as NaNs.
- Boolean scalars (including python True) are legal boolean indexes and are never treated as integers.
- array(1)[array(True)] gives array([1]) and not the original array.

np.random.multivariate_normal behavior with bad covariance matrix

It is now possible to adjust the behavior of the function when dealing with the covariance matrix by using two new keyword arguments:

- tol can be used to specify a tolerance to use when checking that the covariance matrix is positive semidefinite.
- check_valid can be used to configure what the function will do in the presence of a matrix that is not positive semidefinite. Valid options are ignore, warn and raise. The default value, warn, keeps the behavior used in previous releases.

assert_array_less compares np.inf and -np.inf now

Previously, np.testing.assert_array_less ignored all infinite values. This is not the expected behavior both according to documentation and intuitively. Now, -inf < x < inf is considered True for any real number x, and all other cases fail.
assert_array_ and masked arrays assert_equal hide fewer warnings

Some warnings that were previously hidden by the assert_array_ functions are not hidden anymore. In most cases the warnings should be correct and, should they occur, will require changes to the tests using these functions. For the masked array assert_equal version, warnings may occur when comparing NaT. The function presently does not handle NaT or NaN specifically, and it may be best to avoid it at this time should a warning show up due to this change.
offset attribute value in memmap objects
The offset attribute in a memmap object is now set to the offset into the file. This is a behaviour change only for offsets greater than mmap.ALLOCATIONGRANULARITY.
np.real and np.imag return scalars for scalar inputs
Previously, np.real and np.imag used to return array objects when provided a scalar input, which was inconsistent with other functions such as np.angle and np.conj.
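A quick example of the scalar-in, scalar-out behavior described above:

```python
import numpy as np

# Scalar input now yields a scalar, not a 0-d array wrapper.
r = np.real(3 + 4j)
i = np.imag(3 + 4j)
print(r, i)                        # 3.0 4.0
print(isinstance(r, np.ndarray))   # False
```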
The ABCPolyBase class, from which the convenience classes are derived, sets __array_ufunc__ = None in order to opt out of ufuncs. If a polynomial convenience class instance is passed as an argument to a ufunc, a TypeError will now be raised.
For calls to ufuncs, it was already possible, and recommended, to use an out argument with a tuple for ufuncs with multiple outputs. This has now been extended to output arguments in the reduce, accumulate, and reduceat methods. This is mostly for compatibility with __array_ufunc__; there are no ufuncs yet that have more than one output.
NumPy 1.12.1 supports Python 2.7 and 3.4 - 3.6 and fixes bugs and regressions found in NumPy 1.12.0. In particular, the regression in f2py constant parsing is fixed. Wheels for Linux, Windows, and OSX can be found on PyPI.
This release supports Python 2.7 and 3.4 - 3.6.
The NumPy 1.12.0 release contains a large number of fixes and improvements, but few that stand out above all others. That makes picking out the highlights somewhat arbitrary, but the following may be of particular interest or indicate areas likely to have future consequences.
Order of operations in np.einsum can now be optimized for large speed improvements.
New signature argument to np.vectorize for vectorizing with core dimensions.
The keepdims argument was added to many functions.
Support for PyPy has been added. While not complete (updateifcopy is not supported yet), this is a milestone for PyPy's C-API compatibility layer.
data attribute
Assigning the 'data' attribute is an inherently unsafe operation, as pointed out in gh-7083. Such a capability will be removed in the future.
linspace
np.linspace now raises DeprecationWarning when num cannot be safely interpreted as an integer.
binary_repr
If a 'width' parameter is passed into binary_repr that is insufficient to represent the number in base 2 (positive) or 2's complement (negative) form, the function used to silently ignore the parameter and return a representation using the minimal number of bits needed for the form in question. Such behavior is now considered unsafe from a user perspective and will raise an error in the future.
In 1.13 NAT will always compare False except for NAT != NAT, which will be True. In short, NAT will behave like NaN.
np.average will preserve subclasses, to match the behavior of most other numpy functions such as np.mean. In particular, this means calls which returned a scalar may return a 0-d subclass object instead.
In 1.13 the behavior of structured arrays involving multiple fields will change in two ways:
First, indexing a structured array with multiple fields (e.g., arr[['f1', 'f3']]) will return a view into the original array in 1.13, instead of a copy. Note the returned view will have extra padding bytes corresponding to intervening fields in the original array, unlike the copy in 1.12, which will affect code such as arr[['f1', 'f3']].view(newdtype).
Second, for numpy versions 1.6 to 1.12 assignment between structured arrays occurs "by field name": fields in the destination array are set to the identically-named field in the source array, or to 0 if the source does not have a field:
>>> a = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')])
>>> b = np.ones(2, dtype=[('z', 'i4'), ('y', 'i4'), ('x', 'i4')])
>>> b[:] = a
>>> b
array([(0, 2, 1), (0, 4, 3)], dtype=[('z', '<i4'), ('y', '<i4'), ('x', '<i4')])
In 1.13 assignment will instead occur "by position": the Nth field of the destination will be set to the Nth field of the source regardless of field name. The old behavior can be obtained by using indexing to reorder the fields before assignment, e.g., b[['x', 'y']] = a[['y', 'x']].
Indexing with floats raises IndexError, e.g., a[0, 0.0].
Indexing with non-integer array_like raises IndexError, e.g., a['1', '2'].
Indexing with multiple ellipsis raises IndexError, e.g., a[..., ...].
Non-integers used as index values raise TypeError, e.g., in reshape, take, and specifying a reduce axis.
np.full now returns an array of the fill-value's dtype if no dtype is given, instead of defaulting to float.
np.average will emit a warning if the argument is a subclass of ndarray, as the subclass will be preserved starting in 1.13. (see Future Changes)
power and ** raise errors for integer to negative integer powers
The previous behavior depended on whether numpy scalar integers or numpy integer arrays were involved.
For arrays
For scalars
All of these cases now raise a ValueError, except for those integer combinations whose common type is float, for instance uint64 and int8. It was felt that a simple rule was the best way to go rather than have special exceptions for the integer units. If you need negative powers, use an inexact type.
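A minimal illustration of the new rule and the recommended workaround:

```python
import numpy as np

a = np.arange(1, 4)          # integer array
try:
    a ** -1                  # integer to negative integer power
except ValueError as e:
    print("raised:", e)

# Workarounds: use an inexact type, or the float_power ufunc.
print(a ** -1.0)             # [1.         0.5        0.33333333]
print(np.float_power(a, -1))
```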
This will have some impact on code that assumed that F_CONTIGUOUS and C_CONTIGUOUS were mutually exclusive and could be set to determine the default order for arrays that are now both.
np.percentile 'midpoint' interpolation method fixed for exact indices
The 'midpoint' interpolator now gives the same result as 'lower' and 'higher' when the two coincide. The previous behavior of 'lower' + 0.5 is fixed.
keepdims kwarg is passed through to user-class methods
numpy functions that take a keepdims kwarg now pass the value through to the corresponding methods on ndarray subclasses. Previously the keepdims keyword would be silently dropped. These functions now have the following behavior:
If the user does not provide keepdims, no keyword is passed to the underlying method.
Any user-provided value of keepdims is passed through as a keyword argument to the method.
This will raise in the case where the method does not support a keepdims kwarg and the user explicitly passes in keepdims.
The following functions are changed: sum, product, sometrue, alltrue, any, all, amax, amin, prod, mean, std, var, nanmin, nanmax, nansum, nanprod, nanmean, nanmedian, nanvar, nanstd.
bitwise_and identity changed
The previous identity was 1; it is now -1. See the entry in Improvements for more explanation.
Similar to the unmasked median, the masked median ma.median now emits a RuntimeWarning and returns NaN in slices where an unmasked NaN is present.
assert_almost_equal
The precision check for scalars has been changed to match that for arrays. It is now:
abs(actual - desired) < 1.5 * 10**(-decimal)
Note that this is looser than previously documented, but agrees with the previous implementation used in assert_array_almost_equal. Due to the change in implementation, some very delicate tests may fail that did not fail before.
NoseTester behaviour of warnings during testing
When raise_warnings="develop" is given, all uncaught warnings will now be considered a test failure. Previously only selected ones were raised. Warnings which are not caught or raised (mostly when in release mode) will be shown once during the test cycle, similar to the default python settings.
assert_warns and deprecated decorator more specific
The assert_warns function and context manager are now more specific to the given warning category. This increased specificity leads to them being handled according to the outer warning settings. This means that no warning may be raised in cases where a wrong-category warning is given and ignored outside the context. Alternatively, the increased specificity may mean that warnings that were incorrectly ignored will now be shown or raised. See also the new suppress_warnings context manager. The same is true for the deprecated decorator.
No changes.
as_strided
np.lib.stride_tricks.as_strided now has a writeable keyword argument. It can be set to False when no write operation to the returned array is expected, to avoid accidental unpredictable writes.
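A short sketch of the new writeable flag, using a sliding-window view (the window shape here is illustrative):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

a = np.arange(10)
s = a.strides[0]
# Overlapping sliding windows over `a`; marked read-only because the
# windows share memory and writes would be unpredictable.
windows = as_strided(a, shape=(8, 3), strides=(s, s), writeable=False)
print(windows[0])            # [0 1 2]
try:
    windows[0, 0] = 99
except ValueError as e:
    print("read-only:", e)
```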
axes keyword argument for rot90
The axes keyword argument in rot90 determines the plane in which the array is rotated. It defaults to axes=(0, 1) as in the original function.
flip
flipud and fliplr reverse the elements of an array along axis=0 and axis=1 respectively. The newly added flip function reverses the elements of an array along any given axis.
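The relationship between flip and the older helpers can be sketched as:

```python
import numpy as np

a = np.arange(8).reshape(2, 2, 2)
# flip generalizes flipud (axis=0) and fliplr (axis=1) to any axis.
assert np.array_equal(np.flip(a, 0), np.flipud(a))
assert np.array_equal(np.flip(a, 1), np.fliplr(a))
print(np.flip(a, 2)[0, 0])   # [1 0] -- the last axis reversed
```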
np.count_nonzero now has an axis parameter, allowing non-zero counts to be generated on more than just a flattened array object.
numpy.distutils
Building against the BLAS implementation provided by the BLIS library is now supported. See the [blis] section in site.cfg.example (in the root of the numpy repo or source distribution).
numpy/__init__.py to run distribution-specific checks
Binary distributions of numpy may need to run specific hardware checks or load specific libraries during numpy initialization. For example, if we are distributing numpy with a BLAS library that requires SSE2 instructions, we would like to check that the machine on which numpy is running does have SSE2, in order to give an informative error.
Add a hook in numpy/__init__.py to import a numpy/_distributor_init.py file that will remain empty (bar a docstring) in the standard numpy source, but that can be overwritten by people making binary distributions of numpy.
nancumsum and nancumprod added
Nan-functions nancumsum and nancumprod have been added to compute cumsum and cumprod while ignoring nans.
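A quick example of how NaNs are skipped (treated as 0 for the sum, 1 for the product):

```python
import numpy as np

a = np.array([1.0, np.nan, 2.0, np.nan, 3.0])
print(np.nancumsum(a))    # [1. 1. 3. 3. 6.]
print(np.nancumprod(a))   # [1. 1. 2. 2. 6.]
```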
np.interp can now interpolate complex values
np.lib.interp(x, xp, fp) now allows the interpolated array fp to be complex, and will interpolate at complex128 precision.
polyvalfromroots added
The new function polyvalfromroots evaluates a polynomial at given points from the roots of the polynomial. This is useful for higher-order polynomials, where expansion into polynomial coefficients is inaccurate at machine precision.
geomspace added
The new function geomspace generates a geometric sequence. It is similar to logspace, but with start and stop specified directly: geomspace(start, stop) behaves the same as logspace(log10(start), log10(stop)).
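The equivalence above can be demonstrated directly:

```python
import numpy as np

# start and stop are given directly, not as exponents.
print(np.geomspace(1, 1000, num=4))   # [   1.   10.  100. 1000.]

# Equivalent logspace spelling:
print(np.logspace(np.log10(1), np.log10(1000), num=4))
```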
A new context manager suppress_warnings has been added to the testing utils. This context manager is designed to help reliably test warnings, specifically to reliably filter/ignore them. Ignoring warnings by using an "ignore" filter in Python versions before 3.4.x can quickly result in these (or similar) warnings not being tested reliably.
The context manager allows filtering (as well as recording) warnings similar to the catch_warnings context, but allows for easier specificity. Also, printing warnings that have not been filtered, or nesting the context manager, will work as expected. Additionally, it is possible to use the context manager as a decorator, which can be useful when multiple tests need to hide the same warning.
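A minimal sketch of recording a specific warning category with suppress_warnings (the triggering expression is just an example that emits an "invalid value" RuntimeWarning):

```python
import numpy as np
from numpy.testing import suppress_warnings

with suppress_warnings() as sup:
    # Record (and hide) matching RuntimeWarnings inside the block.
    log = sup.record(RuntimeWarning, "invalid value")
    np.sqrt(np.array(-1.0))   # emits "invalid value encountered..."
    print(len(log))           # 1
```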
ma.convolve and ma.correlate added
These functions wrap the non-masked versions, but propagate through masked values. There are two different propagation modes. The default causes masked values to contaminate the result with masks, while the other mode only outputs masks if there is no alternative.
float_power ufunc
The new float_power ufunc is like the power function, except all computation is done in a minimum precision of float64. There was a long discussion on the numpy mailing list of how to treat integers to negative integer powers, and a popular proposal was that the __pow__ operator should always return results of at least float64 precision. The float_power function implements that option. Note that it does not support object arrays.
np.loadtxt now supports a single integer as usecols argument
Instead of using usecols=(n,) to read the nth column of a file, it is now allowed to use usecols=n. Also, the error message is more user friendly when a non-integer is passed as a column index.
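A small example of the single-integer form, reading from an in-memory file:

```python
import io
import numpy as np

data = io.StringIO("1 10\n2 20\n3 30\n")
# A bare integer now works where a 1-tuple was required before.
col = np.loadtxt(data, usecols=1)
print(col)   # [10. 20. 30.]
```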
histogram
Added 'doane' and 'sqrt' estimators to histogram via the bins argument. Added support for range-restricted histograms with automated bin estimation.
np.roll can now roll multiple axes at the same time
The shift and axis arguments to roll are now broadcast against each other, and each specified axis is shifted accordingly.
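For example, rows and columns can now be shifted in one call:

```python
import numpy as np

a = np.arange(9).reshape(3, 3)
# Shift rows by 1 and columns by 2 in a single call.
print(np.roll(a, shift=(1, 2), axis=(0, 1)))
# [[7 8 6]
#  [1 2 0]
#  [4 5 3]]
```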
__complex__ method has been implemented for the ndarrays
Calling complex() on a size-1 array will now cast to a python complex.
pathlib.Path objects now supported
The standard np.load, np.save, np.loadtxt, np.savez, and similar functions can now take pathlib.Path objects as an argument instead of a filename or open file object.
bits attribute for np.finfo
This makes np.finfo consistent with np.iinfo, which already has that attribute.
signature argument to np.vectorize
This argument allows for vectorizing user-defined functions with core dimensions, in the style of NumPy's generalized universal functions. This allows for vectorizing a much broader class of functions. For example, an arbitrary distance metric that combines two vectors to produce a scalar could be vectorized with signature='(n),(n)->()'. See np.vectorize for full details.
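The distance-metric example above can be sketched as follows (sqdist is a hypothetical user function):

```python
import numpy as np

# A scalar-valued distance between two vectors, vectorized over any
# leading broadcast dimensions via a gufunc-style signature.
def sqdist(u, v):
    return ((u - v) ** 2).sum()

vec_sqdist = np.vectorize(sqdist, signature='(n),(n)->()')
x = np.zeros((4, 3))
y = np.ones((4, 3))
print(vec_sqdist(x, y))   # [3. 3. 3. 3.]
```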
To help people migrate their code bases from Python 2 to Python 3, the python interpreter has a handy option, -3, which issues warnings at runtime. One of its warnings is for integer division:
$ python -3 -c "2/3"
-c:1: DeprecationWarning: classic int division
In Python 3, the new integer division semantics also apply to numpy arrays. With this version, numpy will emit a similar warning:
$ python -3 -c "import numpy as np; np.array(2)/np.array(3)"
-c:1: DeprecationWarning: numpy: classic int division
Previously, it included str (bytes) and unicode on Python 2, but only str (unicode) on Python 3.
bitwise_and identity changed
The previous identity was 1, with the result that all bits except the LSB were masked out when the reduce method was used. The new identity is -1, which should work properly on two's complement machines as all bits will be set to one.
Generalized Ufuncs, including most of the linalg module, will now unlockthe Python global interpreter lock.
The caches in np.fft that speed up successive FFTs of the same length can no longer grow without bounds. They have been replaced with LRU (least recently used) caches that automatically evict no longer needed items if either the memory size or item count limit has been reached.
Fixed several interfaces that explicitly disallowed arrays with zero-width string dtypes (i.e. dtype('S0') or dtype('U0')), and fixed several bugs where such dtypes were not handled properly. In particular, changed ndarray.__new__ to not implicitly convert dtype('S0') to dtype('S1') (and likewise for unicode) when creating new arrays.
If the cpu supports it at runtime the basic integer ufuncs now use AVX2instructions. This feature is currently only available when compiled with GCC.
np.einsum
np.einsum now supports the optimize argument, which will optimize the order of contraction. For example, np.einsum would complete the chain dot example np.einsum('ij,jk,kl->il', a, b, c) in a single pass, which would scale like N^4; however, when optimize=True, np.einsum will create an intermediate array to reduce this scaling to N^3, or effectively np.dot(a, b).dot(c). Usage of intermediate tensors to reduce scaling has been applied to the general einsum summation notation. See np.einsum_path for more details.
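The chain-dot example can be checked directly; einsum_path reports the contraction order the optimizer chose (array sizes here are illustrative):

```python
import numpy as np

rng = np.random.RandomState(0)
a = rng.rand(10, 20)
b = rng.rand(20, 30)
c = rng.rand(30, 5)

plain = np.einsum('ij,jk,kl->il', a, b, c)
opt = np.einsum('ij,jk,kl->il', a, b, c, optimize=True)
assert np.allclose(plain, opt)   # same result, better scaling

# einsum_path shows the pairwise contractions chosen.
path, info = np.einsum_path('ij,jk,kl->il', a, b, c, optimize='optimal')
print(path[0])   # 'einsum_path'
```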
The quicksort kind of np.sort and np.argsort is now an introsort, which is regular quicksort but changing to a heapsort when not enough progress is made. This retains the good quicksort performance while changing the worst-case runtime from O(N^2) to O(N*log(N)).
ediff1d improved performance and subclass handling
The ediff1d function uses an array instead of a flat iterator for the subtraction. When to_begin or to_end is not None, the subtraction is performed in place to eliminate a copy operation. A side effect is that certain subclasses are handled better, namely astropy.Quantity, since the complete array is created, wrapped, and then begin and end values are set, instead of using concatenate.
ndarray.mean for float16 arrays
The computation of the mean of float16 arrays is now carried out in float32 for improved precision. This should be useful in packages such as Theano, where the precision of float16 is adequate and its smaller footprint is desirable.
Internally, many array-like methods in fromnumeric.py were being called withpositional arguments instead of keyword arguments as their external signatureswere doing. This caused a complication in the downstream ‘pandas’ librarythat encountered an issue with ‘numpy’ compatibility. Now, all array-likemethods in this module are called with keyword arguments instead.
Previously operations on a memmap object would misleadingly return a memmap instance even if the result was actually not memmapped. For example, arr + 1 or arr + arr would return memmap instances, although no memory from the output array is memmapped. Version 1.12 returns ordinary numpy arrays from these operations.
Also, reduction of a memmap (e.g. .sum(axis=None)) now returns a numpy scalar instead of a 0d memmap.
The stacklevel for python based warnings was increased so that most warnings will report the offending line of the user code instead of the line the warning itself is given. Passing of stacklevel is now tested to ensure that new warnings will receive the stacklevel argument.
This causes warnings with the "default" or "module" filter to be shown once for every offending user code line or user module, instead of only once. On python versions before 3.4, this can cause warnings to appear that were falsely ignored before, which may be surprising, especially in test suites.
Numpy 1.11.3 fixes a bug that leads to file corruption when very large files opened in append mode are used in ndarray.tofile. It supports Python versions 2.6 - 2.7 and 3.2 - 3.5. Wheels for Linux, Windows, and OS X can be found on PyPI.
A total of 2 people contributed to this release. People with a “+” by theirnames contributed a patch for the first time.
Numpy 1.11.2 supports Python 2.6 - 2.7 and 3.2 - 3.5. It fixes bugs andregressions found in Numpy 1.11.1 and includes several build relatedimprovements. Wheels for Linux, Windows, and OS X can be found on PyPI.
Fixes overridden by later merges and release notes updates are omitted.
Numpy 1.11.1 supports Python 2.6 - 2.7 and 3.2 - 3.5. It fixes bugs and regressions found in Numpy 1.11.0 and includes several build related improvements. Wheels for Linux, Windows, and OS X can be found on PyPI.
This release supports Python 2.6 - 2.7 and 3.2 - 3.5 and contains a numberof enhancements and improvements. Note also the build system changes listedbelow as they may have subtle effects.
No Windows (TM) binaries are provided for this release due to a brokentoolchain. One of the providers of Python packages for Windows (TM) is yourbest bet.
Details of these improvements can be found below.
A dtype parameter has been added to randint.
Improved automated bin estimation for np.histogram.
np.moveaxis for reordering array axes.
Numpy now uses setuptools for its builds instead of plain distutils. This fixes usage of install_requires='numpy' in the setup.py files of projects that depend on Numpy (see gh-6551). It potentially affects the way that build/install methods for Numpy itself behave, though. Please report any unexpected behavior on the Numpy issue tracker.
The following changes are scheduled for Numpy 1.12.0.
Indexing with non-integer array_like raises IndexError, e.g., a['1', '2'].
Indexing with multiple ellipsis raises IndexError, e.g., a[..., ...].
Non-integers used as index values raise TypeError, e.g., in reshape, take, and specifying a reduce axis.
In a future release the following changes will be made.
The rand function exposed in numpy.testing will be removed. That function is left over from early Numpy and was implemented using the Python random module. The random number generators from numpy.random should be used instead.
The ndarray.view method will only allow c_contiguous arrays to be viewed using a dtype of different size causing the last dimension to change. That differs from the current behavior where arrays that are f_contiguous but not c_contiguous can be viewed as a dtype of different size causing the first dimension to change.
Slicing a MaskedArray will return views of both data and mask. Currently the mask is copy-on-write and changes to the mask in the slice do not propagate to the original mask. See the FutureWarnings section below for details.
In prior versions of NumPy the experimental datetime64 type always stored times in UTC. By default, creating a datetime64 object from a string or printing it would convert from or to local time:
# old behavior
>>> np.datetime64('2000-01-01T00:00:00')
numpy.datetime64('2000-01-01T00:00:00-0800')  # note the timezone offset -08:00
A consensus of datetime64 users agreed that this behavior is undesirable and at odds with how datetime64 is usually used (e.g., by pandas). For most use cases, a timezone naive datetime type is preferred, similar to the datetime.datetime type in the Python standard library. Accordingly, datetime64 no longer assumes that input is in local time, nor does it print local times:
>>> np.datetime64('2000-01-01T00:00:00')
numpy.datetime64('2000-01-01T00:00:00')
For backwards compatibility, datetime64 still parses timezone offsets, whichit handles by converting to UTC. However, the resulting datetime is timezonenaive:
>>> np.datetime64('2000-01-01T00:00:00-08')
DeprecationWarning: parsing timezone aware datetimes is deprecated;
this will raise an error in the future
numpy.datetime64('2000-01-01T08:00:00')
As a corollary to this change, we no longer prohibit casting between datetimeswith date units and datetimes with time units. With timezone naive datetimes,the rule for casting from dates to times is no longer ambiguous.
linalg.norm return type changes
The return type of the linalg.norm function is now floating point without exception. Some of the norm types previously returned integers.
The various fit functions in the numpy polynomial package no longer acceptnon-integers for degree specification.
np.dot now raises TypeError instead of ValueError
This behaviour mimics that of other functions such as np.inner. If the two arguments cannot be cast to a common type, it could have raised a TypeError or ValueError depending on their order. Now, np.dot will always raise a TypeError.
In np.lib.split an empty array in the result always had dimension (0,) no matter the dimensions of the array being split. This has been changed so that the dimensions will be preserved. A FutureWarning for this change has been in place since Numpy 1.9 but, due to a bug, sometimes no warning was raised and the dimensions were already preserved.
% and // operators
These operators are implemented with the remainder and floor_divide functions respectively. Those functions are now based around fmod and are computed together so as to be compatible with each other and with the Python versions for float types. The results should be marginally more accurate or outright bug fixes compared to the previous results, but they may differ significantly in cases where roundoff makes a difference in the integer returned by floor_divide. Some corner cases also change; for instance, NaN is always returned for both functions when the divisor is zero, divmod(1.0, inf) returns (0.0, 1.0) except on MSVC 2008, and divmod(-1.0, inf) returns (-1.0, inf).
Removed the check_return and inner_loop_selector members of the PyUFuncObject struct (replacing them with reserved slots to preserve struct layout). These were never used for anything, so it's unlikely that any third-party code is using them either, but we mention it here for completeness.
In python 2, objects which are instances of old-style user-defined classes no longer automatically count as 'object' type in the dtype-detection handler. Instead, as in python 3, they may potentially count as sequences, but only if they define both a __len__ and a __getitem__ method. This fixes a segfault and inconsistency between python 2 and 3.
np.histogram now provides plugin estimators for automatically estimating the optimal number of bins. Passing one of ['auto', 'fd', 'scott', 'rice', 'sturges'] as the argument to 'bins' results in the corresponding estimator being used.
A benchmark suite using Airspeed Velocity has been added, converting the previous vbench-based one. You can run the suite locally via python runtests.py --bench. For more details, see benchmarks/README.rst.
A new function np.shares_memory, which can check exactly whether two arrays have memory overlap, is added. np.may_share_memory also now has an option to spend more effort to reduce false positives.
SkipTest and KnownFailureException exception classes are exposed in the numpy.testing namespace. Raise them in a test function to mark the test to be skipped or mark it as a known failure, respectively.
f2py.compile has a new extension keyword parameter that allows the fortran extension to be specified for generated temp files. For instance, the files can be specified to be *.f90. The verbose argument is also activated; it was previously ignored.
A dtype parameter has been added to np.random.randint. Random ndarrays of the following types can now be generated:
np.bool, np.int8, np.uint8, np.int16, np.uint16, np.int32, np.uint32, np.int64, np.uint64, np.int_, np.intp
The specification is by precision rather than by C type. Hence, on some platforms np.int64 may be a long instead of long long even if the specified dtype is long long, because the two may have the same precision. The resulting type depends on which C type numpy uses for the given precision. The byteorder specification is also ignored; the generated arrays are always in native byte order.
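A short example of drawing directly in a requested dtype (np.bool_ is used here for compatibility with current numpy spellings):

```python
import numpy as np

# Draw samples directly in the requested (native-byte-order) dtype.
a = np.random.randint(0, 256, size=5, dtype=np.uint8)
print(a.dtype)    # uint8

b = np.random.randint(0, 2, size=6, dtype=np.bool_)
print(b.dtype)    # bool
```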
A new np.moveaxis function allows for moving one or more array axes to a new position by explicitly providing source and destination axes. This function should be easier to use than the current rollaxis function, as well as providing more functionality.
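For example, source and destination axes are spelled out explicitly:

```python
import numpy as np

x = np.zeros((3, 4, 5))
# Move the first axis to the end.
print(np.moveaxis(x, 0, -1).shape)             # (4, 5, 3)
# Move several axes at once via source/destination lists.
print(np.moveaxis(x, [0, 1], [-1, -2]).shape)  # (5, 4, 3)
```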
The deg parameter of the various numpy.polynomial fits has been extended to accept a list of the degrees of the terms to be included in the fit, the coefficients of all other terms being constrained to zero. The change is backward compatible; passing a scalar deg will behave as before.
A divmod function for float types modeled after the Python version hasbeen added to the npy_math library.
np.gradient now supports an axis argument
The axis parameter was added to np.gradient for consistency. It allows specifying over which axes the gradient is calculated.
np.lexsort now supports arrays with object data-type
The function now internally calls the generic npy_amergesort when the type does not implement a merge-sort kind of argsort method.
np.ma.core.MaskedArray now supports an order argument
When constructing a new MaskedArray instance, it can be configured with an order argument analogous to the one when calling np.ndarray. The addition of this argument allows for the proper processing of an order argument in several MaskedArray-related utility functions such as np.ma.core.array and np.ma.core.asarray.
Creating a masked array with mask=True (resp. mask=False) now uses np.ones (resp. np.zeros) to create the mask, which is faster and avoids a big memory peak. Another optimization was done to avoid a memory peak and useless computations when printing a masked array.
ndarray.tofile now uses fallocate on linux
The function now uses the fallocate system call to reserve sufficient disk space on file systems that support it.
A.T @ A and A @ A.T
Previously, gemm BLAS operations were used for all matrix products. Now, if the matrix product is between a matrix and its transpose, it will use syrk BLAS operations for a performance boost. This optimization has been extended to @, numpy.dot, numpy.inner, and numpy.matmul.
Note: Requires the transposed and non-transposed matrices to share data.
np.testing.assert_warns can now be used as a context manager
This matches the behavior of assert_raises.
np.random.shuffle is now much faster for 1d ndarrays.
numpy.distutils
The method build_src.generate_a_pyrex_source will remain available; it has been monkeypatched by users to support Cython instead of Pyrex. It's recommended to switch to a better-supported method of building Cython extensions, though.
np.broadcast can now be called with a single argument
The resulting object in that case will simply mimic iteration over a single array. This change obsoletes distinctions like
if len(x) == 1:
    shape = x[0].shape
else:
    shape = np.broadcast(*x).shape
Instead, np.broadcast can be used in all cases.
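A small sketch of the now-uniform usage (common_shape is a hypothetical helper):

```python
import numpy as np

def common_shape(*arrays):
    # No special case needed for a single input anymore.
    return np.broadcast(*arrays).shape

print(common_shape(np.zeros((2, 3))))               # (2, 3)
print(common_shape(np.zeros((2, 3)), np.zeros(3)))  # (2, 3)
```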
np.trace now respects array subclasses
This behaviour mimics that of other functions such as np.diagonal and ensures, e.g., that for masked arrays np.trace(ma) and ma.trace() give the same result.
np.dot now raises TypeError instead of ValueError
This behaviour mimics that of other functions such as np.inner. If the two arguments cannot be cast to a common type, it could have raised a TypeError or ValueError depending on their order. Now, np.dot will always raise a TypeError.
linalg.norm return type changes
The linalg.norm function now does all its computations in floating point and returns floating results. This change fixes bugs due to integer overflow and the failure of abs with signed integers of minimum value, e.g., int8(-128). For consistency, floats are used even where an integer might work.
The F_CONTIGUOUS flag was used to signal that views using a dtype that changed the element size would change the first index. This was always problematic for arrays that were both F_CONTIGUOUS and C_CONTIGUOUS because C_CONTIGUOUS took precedence. Relaxed stride checking results in more such dual contiguous arrays and breaks some existing code as a result. Note that this also affects changing the dtype by assigning to the dtype attribute of an array. The aim of this deprecation is to restrict views to C_CONTIGUOUS arrays at some future time. A workaround that is backward compatible is to use a.T.view(...).T instead. A parameter may also be added to the view method to explicitly ask for Fortran order views, but that will not be backward compatible.
It is currently possible to pass in arguments for the order parameter in methods like array.flatten or array.ravel that were not one of the following: 'C', 'F', 'A', 'K' (note that all of these possible values are both unicode and case insensitive). Such behavior will not be allowed in future releases.
testing namespace
The Python standard library random number generator was previously exposed in the testing namespace as testing.rand. Using this generator is not recommended and it will be removed in a future release. Use generators from the numpy.random namespace instead.
In accordance with the Python C API, which gives preference to the half-open interval over the closed one, np.random.random_integers is being deprecated in favor of calling np.random.randint, which has been enhanced with the dtype parameter as described under "New Features". However, np.random.random_integers will not be removed anytime soon.
MaskedArray
Currently a slice of a masked array contains a view of the original data and a copy-on-write view of the mask. Consequently, any changes to the slice's mask will result in a copy of the original mask being made and that new mask being changed rather than the original. For example, if we make a slice of the original like so, view = original[:], then modifications to the data in one array will affect the data of the other but, because the mask will be copied during assignment operations, changes to the mask will remain local. A similar situation occurs when explicitly constructing a masked array using MaskedArray(data, mask); the returned array will contain a view of data, but the mask will be a copy-on-write view of mask.
In the future, these cases will be normalized so that the data and mask arrays are treated the same way and modifications to either will propagate between views. In 1.11, numpy will issue a MaskedArrayFutureWarning whenever user code modifies the mask of a view that in the future may cause values to propagate back to the original. To silence these warnings and make your code robust against the upcoming changes, you have two options: if you want to keep the current behavior, call masked_view.unshare_mask() before modifying the mask. If you want to get the future behavior early, use masked_view._sharedmask = False. However, note that setting the _sharedmask attribute will break following explicit calls to masked_view.unshare_mask().
This release is a bugfix source release motivated by a segfault regression.No windows binaries are provided for this release, as there appear to bebugs in the toolchain we use to generate those files. Hopefully thatproblem will be fixed for the next release. In the meantime, we suggestusing one of the providers of windows binaries.
The following PRs have been merged into 1.10.4. When the PR is a backport,the PR number for the original PR against master is listed.
N/A this release did not happen due to various screwups involving PyPi.
This release deals with a number of bugs that turned up in 1.10.1 andadds various build and release improvements.
Numpy 1.10.1 supports Python 2.6 - 2.7 and 3.2 - 3.5.
There were back compatibility problems involving views changing the dtype ofmultidimensional Fortran arrays that need to be dealt with over a longertimeframe.
numpy.i

Relaxed stride checking revealed a bug in array_is_fortran(a), which was using PyArray_ISFORTRAN to check for Fortran contiguity instead of PyArray_IS_F_CONTIGUOUS. You may want to regenerate swigged files using the updated numpy.i.
This deprecates assignment of a new descriptor to the dtype attribute of a non-C-contiguous array if it results in changing the shape. This effectively bars viewing a multidimensional Fortran array using a dtype that changes the element size along the first axis.
The reason for the deprecation is that, when relaxed strides checking is enabled, arrays that are both C and Fortran contiguous are always treated as C contiguous, which breaks some code that depended on the two being mutually exclusive for non-scalar arrays of ndim > 1. This deprecation prepares the way to always enable relaxed stride checking.
The following PRs have been merged into 1.10.2. When the PR is a backport,the PR number for the original PR against master is listed.
Initial support for mingwpy was reverted as it was causing problems fornon-windows builds.
A fix for np.lib.split was reverted because it resulted in “fixing” behavior that will be present in Numpy 1.11 and that was already present in Numpy 1.9. See the discussion of the issue at gh-6575 for clarification.
Relaxed stride checking was reverted. There were back compatibilityproblems involving views changing the dtype of multidimensional Fortranarrays that need to be dealt with over a longer timeframe.
A bug in the Numpy 1.10.1 release resulted in exceptions being raised forRuntimeWarning andDeprecationWarning in projects depending on Numpy.That has been fixed.
This release deals with a few build problems that showed up in 1.10.0. Mostusers would not have seen these problems. The differences are:
Numpy 1.10.1 supports Python 2.6 - 2.7 and 3.2 - 3.5.
Commits:
45a3d84 DEP: Remove warning for full when dtype is set.
0c1a5df BLD: import setuptools to allow compile with VS2008 python2.7 sdk
04211c6 BUG: mask nan to 1 in ordered compare
826716f DOC: Document the reason msvc requires SSE2 on 32 bit platforms.
49fa187 BLD: enable SSE2 for 32-bit msvc 9 and 10 compilers
dcbc4cc MAINT: remove Wreturn-type warnings from config checks
d6564cb BLD: do not build exclusively for SSE4.2 processors
15cb66f BLD: do not build exclusively for SSE4.2 processors
c38bc08 DOC: fix var. reference in percentile docstring
78497f4 DOC: Sync 1.10.0-notes.rst in 1.10.x branch with master.
This release supports Python 2.6 - 2.7 and 3.2 - 3.5.
- skiprows and missing removed from np.genfromtxt.
- old_behavior removed from np.correlate.
- When comparing arrays with arr1 == arr2, many corner cases involving strings or structured dtypes that used to return scalars now issue FutureWarning or DeprecationWarning, and in the future will be changed to either perform elementwise comparisons or raise an error.
- In np.lib.split, an empty array in the result always had dimension (0,) no matter the dimensions of the array being split. In Numpy 1.11 that behavior will be changed so that the dimensions will be preserved. A FutureWarning for this change has been in place since Numpy 1.9 but, due to a bug, sometimes no warning was raised and the dimensions were already preserved.

See below for more details on these changes.
Default casting for inplace operations has changed to 'same_kind'. For instance, if n is an array of integers, and f is an array of floats, then n += f will result in a TypeError, whereas in previous Numpy versions the floats would be silently cast to ints. In the unlikely case that the example code is not an actual bug, it can be updated in a backward compatible way by rewriting it as np.add(n, f, out=n, casting='unsafe'). The old 'unsafe' default has been deprecated since Numpy 1.7.
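A minimal sketch of the change above, assuming a NumPy new enough to use the 'same_kind' default:

```python
import numpy as np

n = np.arange(3)             # integer array
f = np.linspace(0, 1, 3)     # float array: [0.0, 0.5, 1.0]

# In-place addition now rejects the float -> int cast under 'same_kind'.
try:
    n += f
except TypeError:
    pass  # raised because float64 cannot be cast to int under 'same_kind'

# Explicitly opt back into the old behavior:
np.add(n, f, out=n, casting='unsafe')  # floats truncated into n
```

Note that the error raised is a subclass of TypeError, so existing `except TypeError` handlers continue to work.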
The numpy version string for development builds has been changed from x.y.z.dev-githash to x.y.z.dev0+githash (note the +) in order to comply with PEP 440.
NPY_RELAXED_STRIDE_CHECKING is now true by default.
UPDATE: In 1.10.2 the default value of NPY_RELAXED_STRIDE_CHECKING waschanged to false for back compatibility reasons. More time is needed beforeit can be made the default. As part of the roadmap a deprecation ofdimension changing views of f_contiguous not c_contiguous arrays was alsoadded.
axis=0 raises IndexError

Using axis != 0 has raised a DeprecationWarning since NumPy 1.7; it now raises an error.
There was inconsistent behavior betweenx.ravel() andnp.ravel(x), aswell as betweenx.diagonal() andnp.diagonal(x), with the methodspreserving subtypes while the functions did not. This has been fixed andthe functions now behave like the methods, preserving subtypes except inthe case of matrices. Matrices are special cased for backwardcompatibility and still return 1-D arrays as before. If you need topreserve the matrix subtype, use the methods instead of the functions.
Previously, a view was returned except when no change was made in the orderof the axes, in which case the input array was returned. A view is nowreturned in all cases.
Previously, an inconsistency existed between 1-D inputs (returning abase ndarray) and higher dimensional ones (which preserved subclasses).Behavior has been unified, and the return will now be a base ndarray.Subclasses can still override this behavior by providing their ownnonzero method.
The changes toswapaxes also apply to thePyArray_SwapAxes C function,which now returns a view in all cases.
The changes tononzero also apply to thePyArray_Nonzero C function,which now returns a base ndarray in all cases.
The dtype structure (PyArray_Descr) has a new member at the end to cacheits hash value. This shouldn’t affect any well-written applications.
The change to the concatenation function DeprecationWarning also affects PyArray_ConcatenateArrays.
Previously the returned types for recarray fields accessed by attribute and by index were inconsistent, and fields of string type were returned as chararrays. Now, fields accessed by either attribute or indexing will return an ndarray for fields of non-structured type, and a recarray for fields of structured type. Notably, this affects recarrays containing strings with whitespace, as trailing whitespace is trimmed from chararrays but kept in ndarrays of string type. Also, the dtype.type of nested structured fields is now inherited.
Viewing an ndarray as a recarray now automatically converts the dtype tonp.record. See new record array documentation. Additionally, viewing a recarraywith a non-structured dtype no longer converts the result’s type to ndarray -the result will remain a recarray.
When using the ‘out’ keyword argument of a ufunc, a tuple of arrays, one per ufunc output, can be provided. For ufuncs with a single output, a single array is also a valid ‘out’ keyword argument. Providing a single array in the ‘out’ keyword argument to a ufunc with multiple outputs, where it would be used as the first output, is deprecated, and will result in a DeprecationWarning now and an error in the future.
Indexing an ndarray using a byte-string in Python 3 now raises an IndexErrorinstead of a ValueError.
For such (rare) masked arrays, getting a single masked item no longer returns acorrupted masked array, but a fully masked version of the item.
Similar to mean, median and percentile now emit a RuntimeWarning and return NaN in slices where a NaN is present. To compute the median or percentile while ignoring invalid values, use the new nanmedian or nanpercentile functions.
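A small sketch of the NaN-propagating and NaN-ignoring variants, assuming a NumPy with the nan-functions available:

```python
import numpy as np

a = np.array([[1.0, np.nan, 3.0],
              [4.0, 5.0, 6.0]])

m = np.median(a, axis=1)      # NaN in a slice -> NaN result (may emit RuntimeWarning)
nm = np.nanmedian(a, axis=1)  # NaN-ignoring variant
```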
All functions from numpy.testing were once available fromnumpy.ma.testutils but not all of them were redefined to work with maskedarrays. Most of those functions have now been removed fromnumpy.ma.testutils with a small subset retained in order to preservebackward compatibility. In the long run this should help avoid mistaken useof the wrong functions, but it may cause import problems for some.
Previously, customization of compilation of dependency libraries and numpy itself was only accomplishable via code changes in the distutils package. Now numpy.distutils reads in the following extra flags from each group of the site.cfg:
- runtime_library_dirs/rpath: sets runtime library directories to override LD_LIBRARY_PATH
- extra_compile_args: add extra flags to the compilation of sources
- extra_link_args: add extra flags when linking libraries

This should, at least partially, complete user customization.
np.cbrt wraps the C99 cube root function cbrt. Compared to np.power(x, 1./3.) it is well defined for negative real floats and a bit faster.
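A quick sketch of the difference for negative inputs:

```python
import numpy as np

r = np.cbrt(np.array([-8.0, 27.0]))   # well defined for negative reals
# np.power(-8.0, 1./3.) would produce nan instead of -2.0
```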
By passing --parallel=n or -j n to setup.py build, the compilation of extensions is now performed in n parallel processes. The parallelization is limited to files within one extension, so projects using Cython will not profit because it builds extensions from single files.
max_rows argument

A max_rows argument has been added to genfromtxt to limit the number of rows read in a single call. Using this functionality, it is possible to read in multiple arrays stored in a single file by making repeated calls to the function.
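A sketch of the repeated-call pattern described above, using an in-memory stream in place of a file:

```python
import numpy as np
from io import StringIO

data = StringIO("1 2\n3 4\n5 6\n7 8\n")
first = np.genfromtxt(data, max_rows=2)   # reads the first two rows
second = np.genfromtxt(data, max_rows=2)  # continues where the first call stopped
```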
np.broadcast_to manually broadcasts an array to a given shape according to numpy's broadcasting rules. The functionality is similar to broadcast_arrays, which in fact has been rewritten to use broadcast_to internally, but only a single array is necessary.
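A minimal example; the result is a read-only view following the usual broadcasting rules:

```python
import numpy as np

x = np.arange(3)
b = np.broadcast_to(x, (2, 3))  # shape (3,) broadcast to (2, 3), no copy
```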
When Python emits a warning, it records that this warning has been emitted in the module that caused the warning, in a module attribute __warningregistry__. Once this has happened, it is not possible to emit the warning again, unless you clear the relevant entry in __warningregistry__. This makes it hard and fragile to test warnings, because if your test comes after another that has already caused the warning, you will not be able to emit the warning or test it. The context manager clear_and_catch_warnings clears warnings from the module registry on entry and resets them on exit, meaning that warnings can be re-raised.
fweights and aweights arguments

The fweights and aweights arguments add new functionality to covariance calculations by applying two types of weighting to observation vectors. An array of fweights indicates the number of repeats of each observation vector, and an array of aweights provides their relative importance or probability.
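A sketch of fweights: weighting an observation by a frequency count is equivalent to repeating that observation explicitly.

```python
import numpy as np

x = np.array([[0.0, 1.0, 2.0],
              [2.0, 1.0, 0.0]])
fw = np.array([1, 2, 1])          # middle observation counted twice

c1 = np.cov(x, fweights=fw)
c2 = np.cov(np.repeat(x, fw, axis=1))  # same result with explicit repeats
```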
Python 3.5 adds support for a matrix multiplication operator ‘@’ proposedin PEP465. Preliminary support for that has been implemented, and anequivalent functionmatmul has also been added for testing purposes anduse in earlier Python versions. The function is preliminary and the orderand number of its optional arguments can be expected to change.
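The function form works on any supported Python version; on Python >= 3.5 it is equivalent to the @ operator:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])
c = np.matmul(a, b)   # same as a @ b on Python >= 3.5
```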
norm keyword to fft functions

The default normalization leaves the direct transforms unscaled and scales the inverse transforms by 1/n. It is possible to obtain unitary transforms by setting the keyword argument norm to "ortho" (default is None), so that both direct and inverse transforms are scaled by 1/sqrt(n).
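A round-trip sketch of the "ortho" normalization:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
X = np.fft.fft(x, norm="ortho")          # forward transform scaled by 1/sqrt(n)
y = np.fft.ifft(X, norm="ortho").real    # inverse with the same scaling round-trips
```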
np.digitize is now implemented in terms of np.searchsorted. This means that a binary search is used to bin the values, which scales much better for a larger number of bins than the previous linear search. It also removes the requirement for the input array to be 1-dimensional.
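A small sketch, including the newly allowed multidimensional input:

```python
import numpy as np

bins = np.array([0.0, 1.0, 2.5, 4.0])
vals = np.array([[0.2, 3.0],
                 [1.5, 6.0]])      # 2-D input is now accepted
idx = np.digitize(vals, bins)      # index i such that bins[i-1] <= x < bins[i]
```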
np.poly will now cast 1-dimensional input arrays of integer type to doubleprecision floating point, to prevent integer overflow when computing the monicpolynomial. It is still possible to obtain higher precision results bypassing in an array of object type, filled e.g. with Python ints.
np.interp now has a new parameter period that supplies the period of the input data xp. In such a case, the input data is properly normalized to the given period and one end point is added to each extremity of xp in order to close the previous and the next period cycles, resulting in the correct interpolation behavior.
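A sketch with angles in degrees; values outside the sampled range wrap around the period:

```python
import numpy as np

xp = np.array([0.0, 90.0, 180.0, 270.0])   # one period of samples
fp = np.array([0.0, 1.0, 0.0, -1.0])
y = np.interp(np.array([350.0, 370.0]), xp, fp, period=360.0)
# 350 interpolates between 270 and the wrapped 360 (= 0); 370 wraps to 10
```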
pad_width and constant_values

The constant_values parameter now accepts NumPy arrays and float values. NumPy arrays are supported as input for pad_width, and an exception is raised if its values are not of integral type.
out argument

The out parameter was added to np.argmax and np.argmin for consistency with ndarray.argmax and ndarray.argmin. The new parameter behaves exactly as it does in those methods.
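A minimal sketch; the out array must have the result's shape and an integer index dtype:

```python
import numpy as np

a = np.array([[1, 9, 3],
              [7, 2, 8]])
out = np.empty(2, dtype=np.intp)
np.argmax(a, axis=1, out=out)   # indices of the row maxima written into out
```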
All of the functions in complex.h are now detected. There are new fallback implementations of the following functions.
As a result of these improvements, there will be some small changes inreturned values, especially for corner cases.
float.hex method

The strings produced by float.hex look like 0x1.921fb54442d18p+1, so this is not the hex used to represent unsigned integer types.
In order to properly handle minimal values of integer types, np.isclose will now cast to the float dtype during comparisons. This aligns its behavior with what was provided by np.allclose.
np.allclose now uses np.isclose internally and inherits the ability to compare NaNs as equal by setting equal_nan=True. Subclasses, such as np.ma.MaskedArray, are also preserved now.
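A short sketch of the equal_nan option:

```python
import numpy as np

a = np.array([1.0, np.nan])
b = np.array([1.0, np.nan])
default = np.allclose(a, b)               # NaN != NaN, so False
with_nan = np.allclose(a, b, equal_nan=True)  # NaNs compare equal
```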
np.genfromtxt now correctly handles integers larger than 2**31-1 on 32-bit systems and larger than 2**63-1 on 64-bit systems (it previously crashed with an OverflowError in these cases). Integers larger than 2**63-1 are converted to floating-point values.
The functions np.load and np.save have additional keyword arguments for controlling backward compatibility of pickled Python objects. This enables Numpy on Python 3 to load npy files containing object arrays that were generated on Python 2.
Built-in assumptions that the baseclass behaved like a plain array are being removed. In particular, setting and getting elements and ranges will respect baseclass overrides of __setitem__ and __getitem__, and arithmetic will respect overrides of __add__, __sub__, etc.
The cblas versions of dot, inner, and vdot have been integrated intothe multiarray module. In particular, vdot is now a multiarray function,which it was not before.
Inputs to generalized universal functions are now more strictly checked against the function's signature: all core dimensions are now required to be present in input arrays; core dimensions with the same label must have the exact same size; and output core dimensions must be specified, either by a same-label input core dimension or by a passed-in output array.
Views returned by np.einsum will now be writeable whenever the input array is writeable.
np.argmin now skips NaT values in datetime64 and timedelta64 arrays, making it consistent with np.min, np.argmax and np.max.
Normally, comparison operations on arrays perform elementwise comparisons and return arrays of booleans. But in some corner cases, especially involving strings or structured dtypes, NumPy has historically returned a scalar instead. For example:
# Current behaviour
np.arange(2) == "foo"
# -> False
np.arange(2) < "foo"
# -> True on Python 2, error on Python 3
np.ones(2, dtype="i4,i4") == np.ones(2, dtype="i4,i4,i4")
# -> False
Continuing work started in 1.9, in 1.10 these comparisons will nowraiseFutureWarning orDeprecationWarning, and in the futurethey will be modified to behave more consistently with othercomparison operations, e.g.:
# Future behaviour
np.arange(2) == "foo"
# -> array([False, False])
np.arange(2) < "foo"
# -> error, strings and numbers are not orderable
np.ones(2, dtype="i4,i4") == np.ones(2, dtype="i4,i4,i4")
# -> [False, False]
The SafeEval class in numpy/lib/utils.py is deprecated and will be removedin the next release.
The alterdot and restoredot functions no longer do anything, and aredeprecated.
These ways of loading packages are now deprecated.
The values for the bias and ddof arguments to the corrcoef function cancel in the division implied by the correlation coefficient, and so have no effect on the returned values.
We now deprecate these arguments tocorrcoef and the masked array versionma.corrcoef.
Because we are deprecating the bias argument to ma.corrcoef, we also deprecate the use of the allow_masked argument as a positional argument, as its position will change with the removal of bias. allow_masked will in due course become a keyword-only argument.
Since 1.6, creating a dtype object from its string representation, e.g. 'f4', would issue a deprecation warning if the size did not correspond to an existing type, and default to creating a dtype of the default size for the type. Starting with this release, this will now raise a TypeError.
The only exception is object dtypes, where both 'O4' and 'O8' will still issue a deprecation warning. This platform-dependent representation will raise an error in the next release.
In preparation for this upcoming change, the string representation of an object dtype, i.e. np.dtype(object).str, no longer includes the item size, i.e. it will return '|O' instead of '|O4' or '|O8' as before.
This is a bugfix only release in the 1.9.x series.
This is a bugfix only release in the 1.9.x series.
This release supports Python 2.6 - 2.7 and 3.2 - 3.4.
In NumPy 1.8, the diagonal and diag functions returned readonly copies, inNumPy 1.9 they return readonly views, and in 1.10 they will return writeableviews.
In previous numpy versions, operations involving floating point scalars containing the special values NaN, Inf and -Inf caused the result type to be at least float64. As the special values can be represented in the smallest available floating point type, the upcast is not performed anymore.
For example the dtype of:
np.array([1.], dtype=np.float32) * float('nan')
now remains float32 instead of being cast to float64. Operations involving non-special values have not been changed.
If given more than one percentile to compute, numpy.percentile returns an array instead of a list. A single percentile still returns a scalar. The array is equivalent to converting the list returned in older versions to an array via np.array.
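A short sketch of the two return styles:

```python
import numpy as np

a = np.arange(10)
q = np.percentile(a, [25, 50, 75])   # array result for multiple percentiles
s = np.percentile(a, 50)             # scalar result for a single percentile
```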
If the overwrite_input option is used, the input is only partially sorted instead of fully sorted.
All tofile exceptions are now IOError; some were previously ValueError.
Two changes to numpy.ma.core._check_fill_value:
This may cause problems with folks who depended on the polynomial classesbeing derived from PolyBase. They are now all derived from the abstractbase class ABCPolyBase. Strictly speaking, there should be a deprecationinvolved, but no external code making use of the old baseclass could befound.
A bug in one of the algorithms to generate a binomial random variate hasbeen fixed. This change will likely alter the number of random drawsperformed, and hence the sequence location will be different after acall to distribution.c::rk_binomial_btpe. Any tests which rely on the RNGbeing in a known state should be checked and/or updated as a result.
np.random.seed and np.random.RandomState now throw a ValueError if the seed cannot safely be converted to 32-bit unsigned integers. Applications that now fail can be fixed by masking the higher 32-bit values to zero: seed = seed & 0xFFFFFFFF. This is what was done silently in older versions, so the random stream remains the same.
The out argument to np.argmin and np.argmax and their equivalent C-API functions is now checked to match the desired output shape exactly. If the check fails, a ValueError instead of TypeError is raised.
Remove unnecessary broadcasting notation restrictions. np.einsum('ijk,j->ijk', A, B) can also be written as np.einsum('ij...,j->ij...', A, B) (ellipsis is no longer required on 'j').
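A sketch showing the two spellings agree:

```python
import numpy as np

A = np.arange(24.0).reshape(2, 3, 4)
B = np.arange(3.0)

r1 = np.einsum('ijk,j->ijk', A, B)
r2 = np.einsum('ij...,j->ij...', A, B)   # ellipsis no longer required on 'j'
```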
The NumPy indexing has seen a complete rewrite in this version. This makesmost advanced integer indexing operations much faster and should have noother implications. However some subtle changes and deprecations wereintroduced in advanced indexing operations:
- Boolean indexing into scalar arrays returns a new 1-d array: array(1)[array(True)] gives array([1]) and not the original array.
- Assignments through flat iteration, arr.flat[index] = values, use the old code branch (for example a = np.ones(10); a[np.arange(10)] = [1, 2, 3]).
- Duplicate indices in assignments are not guaranteed a particular order: arr[[0, 0], [1, 1]] = [1, 2] may set arr[0, 1] to either 1 or 2.
- In the future, a 0-d boolean array (array(True)) will be a legal boolean index. At this time, this is already the case for scalar arrays, to allow the general positive = a[a > 0] to work when a is zero dimensional.
- Operations that previously treated array(True) and array(False) as equivalent to 1 and 0 when the result of the operation was a scalar will raise an error in NumPy 1.9 and, as noted above, be treated as a boolean index in the future.
- Failures now raise IndexError. Indexing with more than one ellipsis (...) is deprecated.
- Non-integer axis indexes to reduction ufuncs like add.reduce or sum are deprecated.
promote_types and string dtype

The promote_types function now returns a valid string length when given an integer or float dtype as one argument and a string dtype as another argument. Previously it always returned the input string dtype, even if it wasn't long enough to store the max integer/float value converted to a string.
can_cast and string dtype

The can_cast function now returns False in "safe" casting mode for integer/float dtype and string dtype if the string dtype length is not long enough to store the max integer/float value converted to a string. Previously, can_cast in "safe" mode returned True for integer/float dtype and a string dtype of any length.
The astype method now raises an error if the string dtype to cast to is not long enough in "safe" casting mode to hold the max value of the integer/float array that is being cast. Previously the casting was allowed even if the result was truncated.
npyio.recfromcsv no longer accepts the undocumented update keyword, which used to override the dtype keyword.
doc/swig directory moved

The doc/swig directory has been moved to tools/swig.
npy_3kcompat.h header changed

The unused simple_capsule_dtor function has been removed from npy_3kcompat.h. Note that this header is not meant to be used outside of numpy; other projects should be using their own copy of this file when needed.
sq_item and sq_ass_item sequence methods

When directly accessing the sq_item or sq_ass_item PyObject slots for item getting, negative indices will no longer be supported. PySequence_GetItem and PySequence_SetItem, however, fix negative indices so that they can be used there.
When NpyIter_RemoveAxis is called, the iterator range will now be reset.
When a multi index is being tracked and an iterator is not buffered, it is possible to use NpyIter_RemoveAxis. In this case an iterator can shrink in size. Because the total size of an iterator is limited, the iterator may be too large before these calls. In this case its size will be set to -1 and an error issued, not at construction time, but when removing the multi index, setting the iterator range, or getting the next function.
This has no effect on currently working code, but highlights the necessityof checking for an error return if these conditions can occur. In mostcases the arrays being iterated are as large as the iterator so that sucha problem cannot occur.
This change was already applied to the 1.8.1 release.
zeros_like for string dtypes now returns empty strings

To match the zeros function, zeros_like now returns an array initialized with empty strings instead of an array filled with '0'.
np.percentile now has the interpolation keyword argument to specify inwhich way points should be interpolated if the percentiles fall between twovalues. See the documentation for the available options.
np.median and np.percentile now support generalized axis arguments like ufunc reductions do since 1.7. One can now say axis=(index, index) to pick a list of axes for the reduction. The keepdims keyword argument was also added to allow convenient broadcasting to arrays of the original shape.
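A sketch of a tuple-of-axes reduction with keepdims:

```python
import numpy as np

a = np.arange(24.0).reshape(2, 3, 4)
m = np.median(a, axis=(0, 2), keepdims=True)  # reduce over axes 0 and 2 at once
```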
np.linspace and np.logspace

The returned data type from the linspace and logspace functions can now be specified using the dtype parameter.
np.triu and np.tril broadcasting

For arrays with ndim exceeding 2, these functions will now apply to the final two axes instead of raising an exception.
tobytes alias for tostring method

ndarray.tobytes and MaskedArray.tobytes have been added as aliases for tostring, which exports arrays as bytes. This is more consistent in Python 3, where str and bytes are not the same.
Added experimental support for the ppc64le and OpenRISC architectures.
numbers module

All numerical numpy types are now registered with the type hierarchy in the Python numbers module.
increasing parameter added to np.vander

The ordering of the columns of the Vandermonde matrix can be specified with this new boolean argument.
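A quick sketch of the column ordering:

```python
import numpy as np

v = np.vander(np.array([1, 2, 3]), increasing=True)
# columns are x**0, x**1, x**2 from left to right
```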
return_counts parameter added to np.unique

The number of times each unique item comes up in the input can now be obtained as an optional return value.
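A minimal example of the optional counts return value:

```python
import numpy as np

vals, counts = np.unique([1, 1, 2, 3, 3, 3], return_counts=True)
```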
The np.nanmedian and np.nanpercentile functions behave like the median and percentile functions, except that NaNs are ignored.
The class may be imported from numpy.lib and can be used for versioncomparison when the numpy version goes to 1.10.devel. For example:
>>> from numpy.lib import NumpyVersion
>>> if NumpyVersion(np.__version__) < '1.10.0':
...     print('Wow, that is an old NumPy version!')
The numpy storage format 1.0 only allowed the array header to have a total size of 65535 bytes. This can be exceeded by structured arrays with a large number of columns. A new format 2.0 has been added which extends the header size to 4 GiB. np.save will automatically save in 2.0 format if the data requires it; otherwise it will always use the more compatible 1.0 format.
np.cross

np.cross now properly broadcasts its two input arrays, even if they have a different number of dimensions. In earlier versions this would result in either an error being raised, or wrong results computed.
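A sketch of broadcasting a single vector against a stack of vectors:

```python
import numpy as np

a = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])   # shape (2, 3)
b = np.array([0.0, 0.0, 1.0])    # shape (3,), broadcast against each row of a
c = np.cross(a, b)
```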
Pairwise summation is now used in the sum method, but only along the fastaxis and for groups of the values <= 8192 in length. This should alsoimprove the accuracy of var and std in some common cases.
np.partition

np.percentile has been implemented in terms of np.partition, which only partially sorts the data via a selection algorithm. This improves the time complexity from O(n log(n)) to O(n).
np.array

The performance of converting lists containing arrays to arrays using np.array has been improved. It is now equivalent in speed to np.vstack(list).
np.searchsorted

For the built-in numeric types, np.searchsorted no longer relies on the data type's compare function to perform the search, but is now implemented by type-specific functions. Depending on the size of the inputs, this can result in performance improvements of over 2x.
Set numpy.distutils.system_info.system_info.verbosity = 0 and then calls to numpy.distutils.system_info.get_info('blas_opt') will not print anything to the output. This is mostly for other packages using numpy.distutils.
np.random.multivariate_normal

A RuntimeWarning is raised when the covariance matrix is not positive-semidefinite.
The polynomial classes have been refactored to use an abstract base classrather than a template in order to implement a common interface. This makesimporting the polynomial package faster as the classes do not need to becompiled on import.
Several more functions now release the Global Interpreter Lock, allowing more efficient parallelization using the threading module. Most notably, the GIL is now released for fancy indexing and np.where, and the random module now uses a per-state lock instead of the GIL.
Built-in assumptions that the baseclass behaved like a plain array are being removed. In particular, repr and str should now work more reliably.
Using non-integer numpy scalars to repeat python sequences is deprecated. For example np.float_(2) * [1] will be an error in the future.
select input deprecations

The integer and empty input to select is deprecated. In the future only boolean arrays will be valid conditions, and an empty condlist will be considered an input error instead of returning the default.
rank function

The rank function has been deprecated to avoid confusion with numpy.linalg.matrix_rank.
In the future, object array comparisons with both == and np.equal will no longer make use of identity checks. For example:
>>> a = np.array([np.array([1, 2, 3]), 1])
>>> b = np.array([np.array([1, 2, 3]), 1])
>>> a == b
will consistently return False (and in the future an error) even if the array in a and b was the same object.
The equality operator == will in the future raise errors like np.equal if broadcasting or element comparisons, etc. fail.
Comparison with arr == None will in the future do an elementwise comparison instead of just returning False. Code should use arr is None.
All of these changes will give Deprecation- or FutureWarnings at this time.
The utility functions npy_PyFile_Dup and npy_PyFile_DupClose are broken by the internal buffering Python 3 applies to its file objects. To fix this, two new functions, npy_PyFile_Dup2 and npy_PyFile_DupClose2, are declared in npy_3kcompat.h, and the old functions are deprecated. Due to the fragile nature of these functions, it is recommended to instead use the Python API when possible.
This change was already applied to the 1.8.1 release.
This is a bugfix only release in the 1.8.x series.
This is a bugfix only release in the 1.8.x series.
When NpyIter_RemoveAxis is called, the iterator range will now be reset.
When a multi index is being tracked and an iterator is not buffered, it is possible to use NpyIter_RemoveAxis. In this case an iterator can shrink in size. Because the total size of an iterator is limited, the iterator may be too large before these calls. In this case its size will be set to -1 and an error issued, not at construction time, but when removing the multi index, setting the iterator range, or getting the next function.
This has no effect on currently working code, but highlights the necessityof checking for an error return if these conditions can occur. In mostcases the arrays being iterated are as large as the iterator so that sucha problem cannot occur.
Set numpy.distutils.system_info.system_info.verbosity = 0 and then calls to numpy.distutils.system_info.get_info('blas_opt') will not print anything to the output. This is mostly for other packages using numpy.distutils.
The utility functions npy_PyFile_Dup and npy_PyFile_DupClose are broken by the internal buffering Python 3 applies to its file objects. To fix this, two new functions, npy_PyFile_Dup2 and npy_PyFile_DupClose2, are declared in npy_3kcompat.h, and the old functions are deprecated. Due to the fragile nature of these functions, it is recommended to instead use the Python API when possible.
This release supports Python 2.6 - 2.7 and 3.2 - 3.3.
- ufunc .at method
- partition function, partial sorting via selection for fast median
- nanmean, nanvar, and nanstd functions skipping NaNs
- full and full_like functions to create value initialized arrays
- PyUFunc_RegisterLoopForDescr, better ufunc support for user dtypes

Support for Python versions 2.4 and 2.5 has been dropped.
Support for SCons has been removed.
The Datetime64 type remains experimental in this release. In 1.9 there will probably be some changes to make it more usable.
The diagonal method currently returns a new array and raises aFutureWarning. In 1.9 it will return a readonly view.
Multiple field selection from an array of structured type currentlyreturns a new array and raises a FutureWarning. In 1.9 it will return areadonly view.
The numpy/oldnumeric and numpy/numarray compatibility modules will beremoved in 1.9.
The doc/sphinxext content has been moved into its own github repository,and is included in numpy as a submodule. See the instructions indoc/HOWTO_BUILD_DOCS.rst.txt for how to access the content.
The hash function of numpy.void scalars has been changed. Previously thepointer to the data was hashed as an integer. Now, the hash function usesthe tuple-hash algorithm to combine the hash functions of the elements ofthe scalar, but only if the scalar is read-only.
Numpy has switched its build system to using ‘separate compilation’ bydefault. In previous releases this was supported, but not default. Thisshould produce the same results as the old system, but if you’re trying todo something complicated like link numpy statically or using an unusualcompiler, then it’s possible you will encounter problems. If so, pleasefile a bug and as a temporary workaround you can re-enable the old buildsystem by exporting the shell variable NPY_SEPARATE_COMPILATION=0.
For the AdvancedNew iterator, the oa_ndim flag should now be -1 to indicate that no op_axes and itershape are passed in. The oa_ndim == 0 case now indicates a 0-D iteration with op_axes being NULL, and the old usage is deprecated. This does not affect the NpyIter_New or NpyIter_MultiNew functions.
The functions nanargmin and nanargmax now return np.iinfo['intp'].min for the index in all-NaN slices. Previously the functions would raise a ValueError for array returns and NaN for scalar returns.
There is a new compile time environment variableNPY_RELAXED_STRIDES_CHECKING. If this variable is set to 1, thennumpy will consider more arrays to be C- or F-contiguous – forexample, it becomes possible to have a column vector which isconsidered both C- and F-contiguous simultaneously. The new definitionis more accurate, allows for faster code that makes fewer unnecessarycopies, and simplifies numpy’s code internally. However, it may alsobreak third-party libraries that make too-strong assumptions about thestride values of C- and F-contiguous arrays. (It is also currentlyknown that this breaks Cython code using memoryviews, which will befixed in Cython.) THIS WILL BECOME THE DEFAULT IN A FUTURE RELEASE, SOPLEASE TEST YOUR CODE NOW AGAINST NUMPY BUILT WITH:
NPY_RELAXED_STRIDES_CHECKING=1 python setup.py install
You can check whether NPY_RELAXED_STRIDES_CHECKING is in effect byrunning:
np.ones((10, 1), order="C").flags.f_contiguous
This will be True if relaxed strides checking is enabled, and False otherwise. The typical problem we've seen so far is C code that works with C-contiguous arrays, and assumes that the itemsize can be accessed by looking at the last element in the PyArray_STRIDES(arr) array. When relaxed strides are in effect, this is not true (and in fact, it never was true in some corner cases). Instead, use PyArray_ITEMSIZE(arr).
For more information check the “Internal memory layout of an ndarray”section in the documentation.
Binary operations of the form <array-or-subclass> * <non-array-subclass> where <non-array-subclass> declares an __array_priority__ higher than that of <array-or-subclass> will now unconditionally return NotImplemented, giving <non-array-subclass> a chance to handle the operation. Previously, NotImplemented would only be returned if <non-array-subclass> actually implemented the reversed operation, and after a (potentially expensive) array conversion of <non-array-subclass> had been attempted. (bug, pull request)
If median is used with the overwrite_input option the input array will now only be partially sorted instead of fully sorted.
The npv function had a bug. Contrary to what the documentation stated, it summed from indexes 1 to M instead of from 0 to M-1. The fix changes the returned value. The mirr function called the npv function, but worked around the problem, so that was also fixed and the return value of the mirr function remains unchanged.
Comparing NaN floating point numbers now raises the invalid runtime warning. If a NaN is expected the warning can be ignored using np.errstate. E.g.:
with np.errstate(invalid='ignore'):
    operation()
The gufunc machinery is now used for np.linalg, allowing operations onstacked arrays and vectors. For example:
>>> a
array([[[ 1.,  1.],
        [ 0.,  1.]],

       [[ 1.,  1.],
        [ 0.,  1.]]])
>>> np.linalg.inv(a)
array([[[ 1., -1.],
        [ 0.,  1.]],

       [[ 1., -1.],
        [ 0.,  1.]]])
The function at has been added to ufunc objects to allow in-place ufuncs with no buffering when fancy indexing is used. For example, the following will increment the first and second items in the array, and will increment the third item twice: numpy.add.at(arr, [0, 1, 2, 2], 1)
This is what many have mistakenly thought arr[[0, 1, 2, 2]] += 1 would do, but that does not work as the incremented value of arr[2] is simply copied into the third slot in arr twice, not incremented twice.
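The difference can be seen in a short sketch (the array values here are illustrative):

```python
import numpy as np

arr = np.zeros(4)
np.add.at(arr, [0, 1, 2, 2], 1)   # unbuffered: index 2 is incremented twice
# arr is now [1., 1., 2., 0.]

arr2 = np.zeros(4)
arr2[[0, 1, 2, 2]] += 1           # buffered: the duplicate index takes effect once
# arr2 is now [1., 1., 1., 0.]
```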
New functions to partially sort arrays via a selection algorithm.
A partition by index k moves the k smallest elements to the front of an array. All elements before k are then smaller than or equal to the value in position k, and all elements following k are greater than or equal to the value in position k. The ordering of the values within these bounds is undefined. A sequence of indices can be provided to sort all of them into their sorted position at once by iterative partitioning. This can be used to efficiently obtain order statistics like the median or percentiles of samples. partition has a linear time complexity of O(n) while a full sort has O(n log(n)).
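A minimal sketch of the behaviour (the array values are illustrative):

```python
import numpy as np

a = np.array([7, 1, 5, 3, 9])
p = np.partition(a, 2)
# p[2] holds the value that would land at index 2 in a full sort (here 5);
# everything before it is <= 5 and everything after it is >= 5,
# but the order within each side is undefined.
```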
New nan aware statistical functions are added. In these functions theresults are what would be obtained if nan values were omitted from allcomputations.
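For example (illustrative values):

```python
import numpy as np

x = np.array([1.0, np.nan, 3.0])
m = np.nanmean(x)   # mean of [1.0, 3.0] -> 2.0
v = np.nanvar(x)    # variance of [1.0, 3.0] -> 1.0
```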
New convenience functions to create arrays filled with a specific value; complementary to the existing zeros and zeros_like functions.
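A short sketch of the two functions (the fill values are illustrative):

```python
import numpy as np

a = np.full((2, 3), 7.0)   # 2x3 array filled with 7.0
b = np.full_like(a, -1)    # same shape and dtype as a, filled with -1
```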
Large NPZ files >2GB can be loaded on 64-bit systems.
It is now possible to build numpy against OpenBLAS by editing site.cfg.
Euler’s constant is now exposed in numpy as euler_gamma.
New modes 'complete', 'reduced', and 'raw' have been added to the qr factorization and the old 'full' and 'economic' modes are deprecated. The 'reduced' mode replaces the old 'full' mode and is the default, as was the 'full' mode, so backward compatibility can be maintained by not specifying the mode.
The 'complete' mode returns a full dimensional factorization, which can be useful for obtaining a basis for the orthogonal complement of the range space. The 'raw' mode returns arrays that contain the Householder reflectors and scaling factors that can be used in the future to apply q without needing to convert to a matrix. The 'economic' mode is simply deprecated; there isn't much use for it and it isn't any more efficient than the 'raw' mode.
The function in1d now accepts an invert argument which, when True, causes the returned array to be inverted.
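For example (illustrative values):

```python
import numpy as np

a = np.array([1, 2, 3, 4])
hits = np.in1d(a, [2, 4])                  # [False, True, False, True]
misses = np.in1d(a, [2, 4], invert=True)   # [True, False, True, False]
```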
It is now possible to use np.newaxis/None together with index arrays instead of only in simple indices. This means that array[np.newaxis, [0, 1]] will now work as expected and select the first two rows while prepending a new axis to the array.
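A short sketch of the resulting shape:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
b = a[np.newaxis, [0, 1]]   # select the first two rows, prepend a new axis
# b.shape == (1, 2, 4)
```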
New ufuncs can now be registered with builtin input types and a customoutput type. Before this change, NumPy wouldn’t be able to find the rightufunc loop function when the ufunc was called from Python, because the ufuncloop signature matching logic wasn’t looking at the output operand type.Now the correct ufunc loop is found, as long as the user provides an outputargument with the correct output type.
A simple test runner script runtests.py was added. It also builds Numpy via setup.py build and can be used to run tests easily during development.
Performance in reading large files was improved by chunking (see also IO compatibility).
The pad function has a new implementation, greatly improving performance for all inputs except mode= (retained for backwards compatibility). Scaling with dimensionality is dramatically improved for rank >= 4.
isnan, isinf, isfinite and byteswap have been improved to take advantage of compiler builtins to avoid expensive calls to libc. This improves performance of these operations by about a factor of two on gnu libc systems.
Several functions have been optimized to make use of SSE2 CPU SIMD instructions.
This improves performance of these operations up to 4x/2x for float32/float64and up to 10x for bool depending on the location of the data in the CPU caches.The performance gain is greatest for in-place operations.
In order to use the improved functions the SSE2 instruction set must be enabledat compile time. It is enabled by default on x86_64 systems. On x86_32 with acapable CPU it must be enabled by passing the appropriate flag to the CFLAGSbuild variable (-msse2 with gcc).
median is now implemented in terms of partition instead of sort, which reduces its time complexity from O(n log(n)) to O(n). If used with the overwrite_input option the array will now only be partially sorted instead of fully sorted.
When creating a ufunc, the default ufunc operand flags can be overriddenvia the new op_flags attribute of the ufunc object. For example, to setthe operand flag for the first input to read/write:
PyObject *ufunc = PyUFunc_FromFuncAndData(...);
ufunc->op_flags[0] = NPY_ITER_READWRITE;
This allows a ufunc to perform an operation in place. Also, global nditer flagscan be overridden via the new iter_flags attribute of the ufunc object.For example, to set the reduce flag for a ufunc:
ufunc->iter_flags = NPY_ITER_REDUCE_OK;
The function np.take now allows 0-d arrays as indices.
The separate compilation mode is now enabled by default.
Several changes to np.insert and np.delete:
Padded regions from np.pad are now correctly rounded, not truncated.
Four new functions have been added to the array C-API.
One new function has been added to the ufunc C-API that allows registering an inner loop for user types using the descr.
ThePyArray_Type instance creation functiontp_new nowusestp_basicsize to determine how much memory to allocate.In previous releases onlysizeof(PyArrayObject) bytes ofmemory were allocated, often requiring C-API subtypes toreimplementtp_new.
The ‘full’ and ‘economic’ modes of qr factorization are deprecated.
The use of non-integers for indices and most integer arguments has been deprecated. Previously float indices and function arguments such as axes or shapes were truncated to integers without warning. For example arr.reshape(3., -1) or arr[0.] will trigger a deprecation warning in NumPy 1.8, and in some future version of NumPy they will raise an error.
This release contains work by the following people who contributed at leastone patch to this release. The names are in alphabetical order by first name:
A total of 119 people contributed to this release.People with a “+” by their names contributed a patch for the first time.
This is a bugfix only release in the 1.7.x series. It supports Python 2.4 - 2.7 and 3.1 - 3.3 and is the last series that supports Python 2.4 - 2.5.
This is a bugfix only release in the 1.7.x series. It supports Python 2.4 - 2.7 and 3.1 - 3.3 and is the last series that supports Python 2.4 - 2.5.
This release includes several new features as well as numerous bug fixes and refactorings. It supports Python 2.4 - 2.7 and 3.1 - 3.3 and is the last release that supports Python 2.4 - 2.5.
- where= parameter to ufuncs (allows the use of boolean arrays to choose where a computation should be done)
- vectorize improvements (added 'excluded' and 'cache' keyword, general cleanup and bug fixes)
- numpy.random.choice (random sample generating function)

In a future version of numpy, the functions np.diag, np.diagonal, and the diagonal method of ndarrays will return a view onto the original array, instead of producing a copy as they do now. This makes a difference if you write to the array returned by any of these functions. To facilitate this transition, numpy 1.7 produces a FutureWarning if it detects that you may be attempting to write to such an array. See the documentation for np.diagonal for details.
Similar to np.diagonal above, in a future version of numpy, indexing arecord array by a list of field names will return a view onto the originalarray, instead of producing a copy as they do now. As with np.diagonal,numpy 1.7 produces a FutureWarning if it detects that you may be attemptingto write to such an array. See the documentation for array indexing fordetails.
In a future version of numpy, the default casting rule for UFunc out=parameters will be changed from ‘unsafe’ to ‘same_kind’. (This also appliesto in-place operations like a += b, which is equivalent to np.add(a, b,out=a).) Most usages which violate the ‘same_kind’ rule are likely bugs, sothis change may expose previously undetected errors in projects that dependon NumPy. In this version of numpy, such usages will continue to succeed,but will raise a DeprecationWarning.
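The effect of the 'same_kind' rule can be sketched with np.can_cast and an explicit casting= argument (a hedged sketch; the values are illustrative):

```python
import numpy as np

# float64 -> int64 is an 'unsafe' cast, not 'same_kind':
ok_same_kind = np.can_cast(np.float64, np.int64, casting='same_kind')  # False
ok_unsafe = np.can_cast(np.float64, np.int64, casting='unsafe')        # True

# An in-place op like a += 1.5 on an int array violates 'same_kind';
# spelling the cast out explicitly keeps it working:
a = np.zeros(3, dtype=np.int64)
np.add(a, 1.5, out=a, casting='unsafe')   # truncates 1.5 -> 1
```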
Full-array boolean indexing has been optimized to use a different,optimized code path. This code path should produce the same results,but any feedback about changes to your code would be appreciated.
Attempting to write to a read-only array (one witharr.flags.writeableset toFalse) used to raise either a RuntimeError, ValueError, orTypeError inconsistently, depending on which code path was taken. It nowconsistently raises a ValueError.
The <ufunc>.reduce functions evaluate some reductions in a different order than in previous versions of NumPy, generally providing higher performance. Because of the nature of floating-point arithmetic, this may subtly change some results, just as linking NumPy to a different BLAS implementation such as MKL can.
If upgrading from 1.5, then generally in 1.6 and 1.7 substantial code has been added and some code paths altered, particularly in the areas of type resolution and buffered iteration over universal functions. This might have an impact on your code, particularly if you relied on accidental behavior in the past.
Any ufunc.reduce function call, as well as other reductions like sum, prod,any, all, max and min support the ability to choose a subset of the axes toreduce over. Previously, one could say axis=None to mean all the axes oraxis=# to pick a single axis. Now, one can also say axis=(#,#) to pick alist of axes for reduction.
There is a new keepdims= parameter, which if set to True, doesn’t throwaway the reduction axes but instead sets them to have size one. When thisoption is set, the reduction result will broadcast correctly to theoriginal operand which was reduced.
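The two options combine as in this sketch (illustrative values):

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)
s = a.sum(axis=(0, 2), keepdims=True)
# s.shape == (1, 3, 1): the reduced axes are kept with size one,
# so s broadcasts correctly against the original operand a.
```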
Note
The datetime API isexperimental in 1.7.0, and may undergo changesin future versions of NumPy.
There have been a lot of fixes and enhancements to datetime64 comparedto NumPy 1.6:
The notes in doc/source/reference/arrays.datetime.rst (also available in the online docs at arrays.datetime.html) should be consulted for more details.
See the new formatter parameter of the numpy.set_printoptions function.
A generic sampling function has been added which will generate samples froma given array-like. The samples can be with or without replacement, andwith uniform or given non-uniform probabilities.
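This describes numpy.random.choice; a short sketch (the values and weights are illustrative):

```python
import numpy as np

# uniform sample of 3 distinct values from range(5), without replacement
sample = np.random.choice(5, size=3, replace=False)

# weighted sampling with replacement
weighted = np.random.choice([10, 20, 30], size=8, p=[0.1, 0.3, 0.6])
```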
Returns a boolean array where two arrays are element-wise equal within atolerance. Both relative and absolute tolerance can be specified.
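This describes numpy.isclose; a short sketch with an explicit relative tolerance (illustrative values):

```python
import numpy as np

close = np.isclose([1.0, 1e10], [1.001, 1.0002e10], rtol=1e-2)
# both pairs fall within the 1% relative tolerance -> [True, True]
far = np.isclose(1.0, 1.5, rtol=1e-2, atol=0.0)   # False
```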
Axis keywords have been added to the integration and differentiationfunctions and a tensor keyword was added to the evaluation functions.These additions allow multi-dimensional coefficient arrays to be used inthose functions. New functions for evaluating 2-D and 3-D coefficientarrays on grids or sets of points were added together with 2-D and 3-Dpseudo-Vandermonde matrices that can be used for fitting.
A pad module containing functions for padding n-dimensional arrays has beenadded. The various private padding functions are exposed as options to apublic ‘pad’ function. Example:
pad(a, 5, mode='mean')
Current modes are constant, edge, linear_ramp, maximum, mean, median, minimum, reflect, symmetric, wrap, and <function>.
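Two of these modes sketched on a small array (illustrative values):

```python
import numpy as np

a = np.array([1, 2, 3])
edge = np.pad(a, 2, mode='edge')   # repeat the edge values
wrap = np.pad(a, 2, mode='wrap')   # wrap around the array
```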
The function searchsorted now accepts a ‘sorter’ argument that is apermutation array that sorts the array to search.
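For example, using argsort to supply the permutation (illustrative values):

```python
import numpy as np

a = np.array([3, 1, 2])
order = np.argsort(a)                      # permutation that sorts a
i = np.searchsorted(a, 2, sorter=order)    # insertion point in sorted order
# a[order] is [1, 2, 3], so 2 would be inserted at index 1
```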
Added experimental support for the AArch64 architecture.
New functionPyArray_RequireWriteable provides a consistent interfacefor checking array writeability – any C code which works with arrays whoseWRITEABLE flag is not known to be True a priori, should make sure to callthis function before writing.
NumPy C Style Guide added (doc/C_STYLE_GUIDE.rst.txt).
The function np.concatenate tries to match the layout of its input arrays.Previously, the layout did not follow any particular reason, and dependedin an undesirable way on the particular axis chosen for concatenation. Abug was also fixed which silently allowed out of bounds axis arguments.
The ufuncs logical_or, logical_and, and logical_not now follow Python’sbehavior with object arrays, instead of trying to call methods on theobjects. For example the expression (3 and ‘test’) produces the string‘test’, and now np.logical_and(np.array(3, ‘O’), np.array(‘test’, ‘O’))produces ‘test’ as well.
The .base attribute on ndarrays, which is used on views to ensure that the underlying array owning the memory is not deallocated prematurely, now collapses out references when you have a view-of-a-view. For example:
a = np.arange(10)
b = a[1:]
c = b[1:]
In numpy 1.6, c.base is b, and c.base.base is a. In numpy 1.7, c.base is a.
To increase backwards compatibility for software which relies on the old behaviour of .base, we only 'skip over' objects which have exactly the same type as the newly created view. This makes a difference if you use ndarray subclasses. For example, if we have a mix of ndarray and matrix objects which are all views on the same original ndarray:
a = np.arange(10)
b = np.asmatrix(a)
c = b[0, 1:]
d = c[0, 1:]
then d.base will be b. This is because d is a matrix object, and so the collapsing process only continues so long as it encounters other matrix objects. It considers c, b, and a in that order, and b is the last entry in that list which is a matrix object.
Casting rules have undergone some changes in corner cases, due to theNA-related work. In particular for combinations of scalar+scalar:
For array + scalar, the above rules just broadcast except the case whenthe array and scalars are unsigned/signed integers, then the result getsconverted to the array type (of possibly larger size) as illustrated by thefollowing examples:
>>> (np.zeros((2,), dtype=np.uint8) + np.int16(257)).dtype
dtype('uint16')
>>> (np.zeros((2,), dtype=np.int8) + np.uint16(257)).dtype
dtype('int16')
>>> (np.zeros((2,), dtype=np.int16) + np.uint32(2**17)).dtype
dtype('int32')
Whether the size gets increased depends on the size of the scalar, forexample:
>>> (np.zeros((2,), dtype=np.uint8) + np.int16(255)).dtype
dtype('uint8')
>>> (np.zeros((2,), dtype=np.uint8) + np.int16(256)).dtype
dtype('uint16')
Also a complex128 scalar + float32 array is cast to complex64.
In NumPy 1.7 the datetime64 type (M) must be constructed by explicitly specifying the type as the second argument (e.g. np.datetime64(2000, 'Y')).
Specifying a custom string formatter with a _format array attribute is deprecated. The new formatter keyword in numpy.set_printoptions or numpy.array2string can be used instead.
The deprecated imports in the polynomial package have been removed.
concatenate now raises DeprecationWarning for 1D arrays if axis != 0. Versions of numpy < 1.7.0 ignored the axis argument value for 1D arrays. We allow this for now, but in due course we will raise an error.
Direct access to the fields of PyArrayObject* has been deprecated. Directaccess has been recommended against for many releases. Expect similardeprecations for PyArray_Descr* and other core objects in the future aspreparation for NumPy 2.0.
The macros in old_defines.h are deprecated and will be removed in the nextmajor release (>= 2.0). The sed script tools/replace_old_macros.sed can beused to replace these macros with the newer versions.
You can test your code against the deprecated C API by #definingNPY_NO_DEPRECATED_API to the target version number, for exampleNPY_1_7_API_VERSION, before including any NumPy headers.
The NPY_CHAR member of the NPY_TYPES enum is deprecated and will be removed in NumPy 1.8. See the discussion at gh-2801 for more details.
This is a bugfix release in the 1.6.x series. Due to the delay of the NumPy1.7.0 release, this release contains far more fixes than a regular NumPy bugfixrelease. It also includes a number of documentation and build improvements.
This is a bugfix only release in the 1.6.x series, with fixes across numpy.core, numpy.lib, numpy.distutils, numpy.random, numpy.f2py and numpy.poly.
This release includes several new features as well as numerous bug fixes andimproved documentation. It is backward compatible with the 1.5.0 release, andsupports Python 2.4 - 2.7 and 3.1 - 3.2.
This release adds support for the IEEE 754-2008 binary16 format, available as the data type numpy.half. Within Python, the type behaves similarly to float or double, and C extensions can add support for it with the exposed half-float API.
A new iterator has been added, replacing the functionality of theexisting iterator and multi-iterator with a single object and API.This iterator works well with general memory layouts different fromC or Fortran contiguous, and handles both standard NumPy andcustomized broadcasting. The buffering, automatic data typeconversion, and optional output parameters, offered byufuncs but difficult to replicate elsewhere, are now exposed by thisiterator.
numpy.polynomial: The number of polynomials available in the polynomial package has been extended. In addition, a new window attribute has been added to the classes in order to specify the range the domain maps to. This is mostly useful for the Laguerre, Hermite, and HermiteE polynomials whose natural domains are infinite and provides a more intuitive way to get the correct mapping of values without playing unnatural tricks with the domain.
numpy.f2py: F2py now supports wrapping Fortran 90 routines that use assumed shape arrays. Previously such routines could be called from Python but the corresponding Fortran routines received assumed shape arrays as zero length arrays, which caused unpredictable results. Thanks to Lorenz Hüdepohl for pointing out the correct way to interface routines with assumed shape arrays.
In addition, f2py now supports automatic wrapping of Fortran routines that use the two-argument size function in dimension specifications.
numpy.ravel_multi_index : Converts a multi-index tuple intoan array of flat indices, applying boundary modes to the indices.
numpy.einsum : Evaluate the Einstein summation convention. Using the Einstein summation convention, many common multi-dimensional array operations can be represented in a simple fashion. This function provides a way to compute such summations.
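A few common operations expressed with einsum (illustrative values):

```python
import numpy as np

A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)

prod = np.einsum('ij,jk->ik', A, B)   # matrix product, same as A.dot(B)
transposed = np.einsum('ij->ji', A)    # transpose
tr = np.einsum('ii', np.eye(3))        # trace
```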
numpy.count_nonzero : Counts the number of non-zero elements in an array.
numpy.result_type and numpy.min_scalar_type : These functions expose the underlying type promotion used by the ufuncs and other operations to determine the types of outputs. These improve upon the numpy.common_type and numpy.mintypecode which provide similar functionality but do not match the ufunc implementation.
Default error handling: The default error handling has been changed from print to warn for all except underflow, which remains ignore.
numpy.distutils: Several new compilers are supported for building Numpy: the Portland Group Fortran compiler on OS X, the PathScale compiler suite and the 64-bit Intel C compiler on Linux.
numpy.testing: The testing framework gained numpy.testing.assert_allclose, which provides a more convenient way to compare floating point arrays than assert_almost_equal, assert_approx_equal and assert_array_almost_equal.
C API: In addition to the APIs for the new iterator and half data type, a number of other additions have been made to the C API. The type promotion mechanism used by ufuncs is exposed via PyArray_PromoteTypes, PyArray_ResultType, and PyArray_MinScalarType. A new enumeration NPY_CASTING has been added which controls what types of casts are permitted. This is used by the new functions PyArray_CanCastArrayTo and PyArray_CanCastTypeTo. A more flexible way to handle conversion of arbitrary python objects into arrays is exposed by PyArray_GetArrayParamsFromObject.
The "normed" keyword in numpy.histogram is deprecated. Its functionality will be replaced by the new "density" keyword.
numpy.fft: The functions refft, refft2, refftn, irefft, irefft2, irefftn, which were aliases for the same functions without the 'e' in the name, were removed.
numpy.memmap: The sync() and close() methods of memmap were removed. Use flush() and "del memmap" instead.
numpy.lib: The deprecated functions numpy.unique1d, numpy.setmember1d, numpy.intersect1d_nu and numpy.lib.ufunclike.log2 were removed.
numpy.ma: Several deprecated items were removed from the numpy.ma module:
* ``numpy.ma.MaskedArray`` "raw_data" method
* ``numpy.ma.MaskedArray`` constructor "flag" keyword
* ``numpy.ma.make_mask`` "flag" keyword
* ``numpy.ma.allclose`` "fill_value" keyword
numpy.distutils: The numpy.get_numpy_include function was removed; use numpy.get_include instead.
This is the first NumPy release which is compatible with Python 3. Support for Python 3 and Python 2 is done from a single code base. Extensive notes on changes can be found at http://projects.scipy.org/numpy/browser/trunk/doc/Py3K.txt.
Note that the Numpy testing framework relies on nose, which does not have a Python 3 compatible release yet. A working Python 3 branch of nose can be found at http://bitbucket.org/jpellerin/nose3/ however.
Porting of SciPy to Python 3 is expected to be completed soon.
Numpy now emits a numpy.ComplexWarning when a complex number is cast into a real number. For example:
>>> x = np.array([1, 2, 3])
>>> x[:2] = np.array([1+2j, 1-2j])
ComplexWarning: Casting complex values to real discards the imaginary part
The cast indeed discards the imaginary part, and this may not be theintended behavior in all cases, hence the warning. This warning can beturned off in the standard way:
>>> import warnings
>>> warnings.simplefilter("ignore", np.ComplexWarning)
Ndarrays now have the dot product also as a method, which allows writingchains of matrix products as
>>> a.dot(b).dot(c)
instead of the longer alternative
>>> np.dot(a, np.dot(b, c))
The slogdet function returns the sign and logarithm of the determinantof a matrix. Because the determinant may involve the product of manysmall/large values, the result is often more accurate than that obtainedby simple multiplication.
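A short sketch of its use (the matrix is illustrative):

```python
import numpy as np

sign, logdet = np.linalg.slogdet(2.0 * np.eye(3))
# det = sign * exp(logdet) = 8.0, recovered without forming the
# potentially overflowing raw product of eigenvalues
```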
The new header file ndarraytypes.h contains the symbols from ndarrayobject.h that do not depend on the PY_ARRAY_UNIQUE_SYMBOL and NO_IMPORT/_ARRAY macros. Broadly, these symbols are types, typedefs, and enumerations; the array function calls are left in ndarrayobject.h. This allows users to include array-related types and enumerations without needing to concern themselves with the macro expansions and their side-effects.
After a two years transition period, the old behavior of the histogram functionhas been phased out, and the “new” keyword has been removed.
The old behavior of correlate was deprecated in 1.4.0, the new behavior (theusual definition for cross-correlation) is now the default.
This minor release includes numerous bug fixes, as well as a few new features. It is backward compatible with the 1.3.0 release.
An __array_prepare__ method has been added to ndarray to provide subclasses greater flexibility to interact with ufuncs and ufunc-like functions. ndarray already provided __array_wrap__, which allowed subclasses to set the array type for the result and populate metadata on the way out of the ufunc (as seen in the implementation of MaskedArray). For some applications it is necessary to provide checks and populate metadata on the way in. __array_prepare__ is therefore called just after the ufunc has initialized the output array, but before computing the results and populating it. This way, checks can be made and errors raised before operations which may modify data in place.
Previously, if an extension was built against version N of NumPy and used on a system with NumPy M < N, import_array was successful, which could cause crashes because version M does not have a function present in N. Starting from NumPy 1.4.0, this will cause a failure in import_array, so the error will be caught early on.
A new neighborhood iterator has been added to the C API. It can be used to iterate over the items in a neighborhood of an array, and can handle boundary conditions automatically. Zero and one padding are available, as well as arbitrary constant value, mirror and circular padding.
New modules chebyshev and polynomial have been added. The new polynomial module is not compatible with the current polynomial support in numpy, but is much like the new chebyshev module. The most noticeable difference to most will be that coefficients are specified from low to high power, that the low level functions do not work with the Chebyshev and Polynomial classes as arguments, and that the Chebyshev and Polynomial classes include a domain. Mapping between domains is a linear substitution and the two classes can be converted one to the other, allowing, for instance, a Chebyshev series in one domain to be expanded as a polynomial in another domain. The new classes should generally be used instead of the low level functions; the latter are provided for those who wish to build their own classes.
The new modules are not automatically imported into the numpy namespace,they must be explicitly brought in with an “import numpy.polynomial”statement.
The following C functions have been added to the C API:
- PyArray_GetNDArrayCFeatureVersion: return the API version of the loaded numpy.
- PyArray_Correlate2 - like PyArray_Correlate, but implements the usualdefinition of correlation. Inputs are not swapped, and conjugate istaken for complex arrays.
- PyArray_NeighborhoodIterNew - a new iterator to iterate over aneighborhood of a point, with automatic boundaries handling. It isdocumented in the iterators section of the C-API reference, and you canfind some examples in the multiarray_test.c.src file in numpy.core.
The following ufuncs have been added to the C API:
- copysign - return the value of the first argument with the sign copiedfrom the second argument.
- nextafter - return the next representable floating point value of thefirst argument toward the second argument.
The alpha processor is now defined and available in numpy/npy_cpu.h. Thefailed detection of the PARISC processor has been fixed. The defines are:
- NPY_CPU_HPPA: PARISC
- NPY_CPU_ALPHA: Alpha
- deprecated decorator: this decorator may be used to avoid cluttering testing output while testing that a DeprecationWarning is effectively raised by the decorated test.
- assert_array_almost_equal_nulp: new method to compare two arrays of floating point values. With this function, two values are considered close if there are not many representable floating point values in between, thus being more robust than assert_array_almost_equal when the values fluctuate a lot.
- assert_array_max_ulp: raise an assertion if there are more than Nrepresentable numbers between two floating point values.
- assert_warns: raise an AssertionError if a callable does not generate awarning of the appropriate class, without altering the warning state.
In 1.3.0, we started putting portable C math routines in the npymath library, so that people can use those to write portable extensions. Unfortunately, it was not possible to easily link against this library: in 1.4.0, support has been added to numpy.distutils so that third parties can reuse this library. See the coremath documentation for more information.
In previous versions of NumPy some set functions (intersect1d,setxor1d, setdiff1d and setmember1d) could return incorrect results ifthe input arrays contained duplicate items. These now work correctlyfor input arrays with duplicates. setmember1d has been renamed toin1d, as with the change to accept arrays with duplicates it isno longer a set operation, and is conceptually similar to anelementwise version of the Python operator ‘in’. All of thesefunctions now accept the boolean keyword assume_unique. This is Falseby default, but can be set True if the input arrays are known notto contain duplicates, which can increase the functions’ executionspeed.
numpy import is noticeably faster (from 20 to 30 % depending on theplatform and computer)
The sort functions now sort nans to the end.
- Real sort order is [R, nan]
- Complex sort order is [R + Rj, R + nanj, nan + Rj, nan + nanj]
Complex numbers with the same nan placements are sorted according tothe non-nan part if it exists.
The type comparison functions have been made consistent with the newsort order of nans. Searchsorted now works with sorted arrayscontaining nan values.
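The end-placement of nans in a sort can be seen in a short sketch (illustrative values):

```python
import numpy as np

s = np.sort(np.array([3.0, np.nan, 1.0]))
# nans sort to the end: [1., 3., nan]
```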
Complex division has been made more resistant to overflow.
Complex floor division has been made more resistant to overflow.
The following functions are deprecated:
- correlate: it takes a new keyword argument old_behavior. When True (thedefault), it returns the same result as before. When False, compute theconventional correlation, and take the conjugate for complex arrays. Theold behavior will be removed in NumPy 1.5, and raises aDeprecationWarning in 1.4.
- unique1d: use unique instead. unique1d raises a deprecationwarning in 1.4, and will be removed in 1.5.
- intersect1d_nu: use intersect1d instead. intersect1d_nu raisesa deprecation warning in 1.4, and will be removed in 1.5.
- setmember1d: use in1d instead. setmember1d raises a deprecationwarning in 1.4, and will be removed in 1.5.
The following raise errors:
- When operating on 0-d arrays, numpy.max and other functions accept only axis=0, axis=-1 and axis=None. Using an out-of-bounds axis is an indication of a bug, so NumPy now raises an error for these cases.
- Specifying axis > MAX_DIMS is no longer allowed; NumPy now raises an error instead of behaving as for axis=None.
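A sketch of the new error behavior (modern NumPy raises AxisError for out-of-bounds axes; the exact exception type has changed over the years):

```python
import numpy as np

# An out-of-bounds axis is now an error rather than silently acting
# like axis=None:
try:
    np.sum(np.arange(3), axis=2)
except Exception as exc:
    print(type(exc).__name__)    # AxisError on modern NumPy

# Valid axes keep working as before:
print(np.sum(np.arange(3), axis=0))   # 3
```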
The numpy complex types are now guaranteed to be ABI compatible with the C99 complex type, if available on the platform. Moreover, the complex ufuncs now use the platform C99 functions instead of our own.
The source code of multiarray and umath has been split into separate logical compilation units. This should make the source code more amenable to newcomers.
By default, every file of multiarray (and umath) is merged into one for compilation, as was the case before, but if the NPY_SEPARATE_COMPILATION environment variable is set to a non-negative value, experimental individual compilation of each file is enabled. This makes the compile/debug cycle much faster when working on core numpy.
New functions which have been added:
- npy_copysign
- npy_nextafter
- npy_cpack
- npy_creal
- npy_cimag
- npy_cabs
- npy_cexp
- npy_clog
- npy_cpow
- npy_csqr
- npy_ccos
- npy_csin
This minor release includes numerous bug fixes, official Python 2.6 support, and several new features such as generalized ufuncs.
Python 2.6 is now supported on all previously supported platforms, including Windows.
There is a general need for looping not only over functions on scalars but also over functions on vectors (or arrays), as explained on http://scipy.org/scipy/numpy/wiki/GeneralLoopingFunctions. We propose to realize this concept by generalizing the universal functions (ufuncs), and provide a C implementation that adds ~500 lines to the numpy code base. In current (specialized) ufuncs, the elementary function is limited to element-by-element operations, whereas the generalized version supports "sub-array" by "sub-array" operations. The Perl vector library PDL provides a similar functionality and its terms are re-used in the following.
Each generalized ufunc has information associated with it that states what the "core" dimensionality of the inputs is, as well as the corresponding dimensionality of the outputs (the element-wise ufuncs have zero core dimensions). The list of the core dimensions for all arguments is called the "signature" of a ufunc. For example, the ufunc numpy.add has signature "(),()->()", defining two scalar inputs and one scalar output.
Another example is (see the GeneralLoopingFunctions page) the function inner1d(a, b) with a signature of "(i),(i)->()". This applies the inner product along the last axis of each input, but keeps the remaining indices intact. For example, where a is of shape (3,5,N) and b is of shape (5,N), this will return an output of shape (3,5). The underlying elementary function is called 3*5 times. In the signature, we specify one core dimension "(i)" for each input and zero core dimensions "()" for the output, since it takes two 1-d arrays and returns a scalar. By using the same name "i", we specify that the two corresponding dimensions should be of the same size (or one of them is of size 1 and will be broadcast).
The dimensions beyond the core dimensions are called "loop" dimensions. In the above example, this corresponds to (3,5).
The usual numpy "broadcasting" rules apply, where the signature determines how the dimensions of each input/output object are split into core and loop dimensions:
When an input array has a smaller dimensionality than the corresponding number of core dimensions, 1's are prepended to its shape. The core dimensions are removed from all inputs and the remaining dimensions are broadcast, defining the loop dimensions. The output is given by the loop dimensions plus the output core dimensions.
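The inner1d case can be sketched with np.vectorize, which (in much later NumPy versions) accepts the same gufunc signature syntax; this illustrates the core/loop dimension semantics, not the C-level implementation described here:

```python
import numpy as np

# "(i),(i)->()": one core dimension i on each input, scalar core output.
inner1d = np.vectorize(np.inner, signature='(i),(i)->()')

a = np.ones((3, 5, 4))    # loop dims (3, 5), core dim 4
b = np.ones((5, 4))       # loop dims (5,),  core dim 4

out = inner1d(a, b)       # elementary function called 3*5 times
print(out.shape)          # (3, 5) -- broadcast loop dims, no core dims
print(out[0, 0])          # 4.0    -- inner product of two length-4 one-vectors
```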
Numpy can now be built on 64-bit Windows (amd64 only, not IA64), with both MS compilers and mingw-w64 compilers:
This is highly experimental: DO NOT USE FOR PRODUCTION. See INSTALL.txt, Windows 64 bits section, for more information on limitations and how to build it yourself.
Float formatting is now handled by numpy instead of the C runtime: this enables locale-independent formatting, and more robust fromstring and related methods. Special values (inf and nan) are also more consistent across platforms (nan vs IND/NaN, etc.), and more consistent with recent python formatting work (in 2.6 and later).
The maximum/minimum ufuncs now reliably propagate nans. If one of the arguments is a nan, then nan is returned. This affects np.min/np.max, amin/amax and the array methods max/min. New ufuncs fmax and fmin have been added to deal with non-propagating nans.
The ufunc sign now returns nan for the sign of a nan.
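A minimal sketch of the propagating vs. non-propagating behavior:

```python
import numpy as np

a = np.array([1.0, np.nan])
b = np.array([2.0, 2.0])

print(np.maximum(a, b))   # [ 2. nan] -- nan propagates
print(np.fmax(a, b))      # [2. 2.]   -- nan ignored when the other value is valid
print(np.sign(np.nan))    # nan
```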
Several new features and bug fixes, including:
- structured arrays should now be fully supported by MaskedArray (r6463, r6324, r6305, r6300, r6294...)
- Minor bug fixes (r6356, r6352, r6335, r6299, r6298)
- Improved support for __iter__ (r6326)
- made baseclass, sharedmask and hardmask accessible to the user (but read-only)
- doc update
Gfortran can now be used as a fortran compiler for numpy on Windows, even when the C compiler is Visual Studio (VS 2005 and above; VS 2003 will NOT work). Gfortran + Visual Studio does not work on 64-bit Windows (but gcc + gfortran does). It is unclear whether it will be possible to use gfortran and Visual Studio at all on x64.
Automatic arch detection can now be bypassed from the command line for the superpack installer:
numpy-1.3.0-superpack-win32.exe /arch=nosse
will install a numpy which works on any x86 machine, even if the running computer supports the SSE instruction set.
The semantics of histogram have been modified to fix long-standing issues with the handling of outliers.
The previous behavior is still accessible using new=False, but this is deprecated, and will be removed entirely in 1.4.0.
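The new= keyword is long gone, but the semantics it introduced remain the modern default; a sketch, assuming explicit bin edges:

```python
import numpy as np

counts, edges = np.histogram([1, 2, 3, 4], bins=[1, 2, 3, 4])

# Bins are half-open [edge, next_edge), except the last, which is
# closed ([3, 4]): the value 4 is counted rather than dropped.
print(counts)   # [1 1 2]
print(edges)    # [1 2 3 4]
```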
A lot of documentation has been added. Both the user guide and the reference documentation can be built with Sphinx.
The following functions have been added to the multiarray C API:
- PyArray_GetEndianness: to get runtime endianness
The following functions have been added to the ufunc API:
- PyUFunc_FromFuncAndDataAndSignature: to declare a more general ufunc (generalized ufunc).
New public C defines are available for ARCH specific code through numpy/npy_cpu.h:
- NPY_CPU_X86: x86 arch (32 bits)
- NPY_CPU_AMD64: amd64 arch (x86_64, NOT Itanium)
- NPY_CPU_PPC: 32 bits ppc
- NPY_CPU_PPC64: 64 bits ppc
- NPY_CPU_SPARC: 32 bits sparc
- NPY_CPU_SPARC64: 64 bits sparc
- NPY_CPU_S390: S390
- NPY_CPU_IA64: ia64
- NPY_CPU_PARISC: PARISC
New macros for CPU endianness have been added as well (see internal changes below for details):
- NPY_BYTE_ORDER: integer
- NPY_LITTLE_ENDIAN/NPY_BIG_ENDIAN defines
Those provide portable alternatives to the glibc endian.h macros for platforms without it.
npy_math.h now makes available several portable macros to get NAN and INFINITY:
- NPY_NAN: equivalent to NAN, which is a GNU extension
- NPY_INFINITY: equivalent to C99 INFINITY
- NPY_PZERO, NPY_NZERO: positive and negative zero respectively
Corresponding single and extended precision macros are available as well. All references to NAN, and home-grown computation of NAN on the fly, have been removed for consistency.
This should make porting to new platforms easier and more robust. In particular, the configuration stage does not need to execute any code on the target platform, which is a first step toward cross-compilation.
A lot of code cleanup for umath/ufunc code (charris).
Numpy can now be built with -W -Wall without warnings
The core math functions (sin, cos, etc. for basic C types) have been put into a separate library; it acts as a compatibility layer to support most C99 math functions (real only for now). The library includes platform-specific fixes for various math functions, such that using those versions should be more robust than using your platform functions directly. The API for existing functions is exactly the same as the C99 math functions API; the only difference is the npy prefix (npy_cos vs cos).
The core library will be made available to any extension in 1.4.0.
npy_cpu.h defines numpy-specific CPU defines, such as NPY_CPU_X86, etc. Those are portable across OS and toolchains, and set up when the header is parsed, so that they can be safely used even in the case of cross-compilation (the values are not set when numpy is built), or for multi-arch binaries (e.g. fat binaries on Mac OS X).
npy_endian.h defines numpy-specific endianness defines, modeled on the glibc endian.h. NPY_BYTE_ORDER is equivalent to BYTE_ORDER, and one of NPY_LITTLE_ENDIAN or NPY_BIG_ENDIAN is defined. As for CPU archs, those are set when the header is parsed by the compiler, and as such can be used for cross-compilation and multi-arch binaries.