numpy.histogram_bin_edges(a, bins=10, range=None, weights=None)

Function to calculate only the edges of the bins used by the histogram function.
Parameters:
    a : array_like
        Input data. The histogram is computed over the flattened array.
    bins : int or sequence of scalars or str, optional
        If an int, it defines the number of equal-width bins in the given range (10, by default). If a sequence, it defines the bin edges, including the rightmost edge. If a string (one of the estimator names listed in the Notes), it selects the method used to calculate the optimal bin width.
    range : (float, float), optional
        The lower and upper range of the bins. If not provided, range is simply (a.min(), a.max()). Values outside the range are ignored.
    weights : array_like, optional
        An array of weights, of the same shape as a. Currently not used by any of the bin estimators, but may be in the future.

Returns:
    bin_edges : array of dtype float
        The edges to pass into histogram.
See also

histogram
Notes
The methods to estimate the optimal number of bins are well founded in literature, and are inspired by the choices R provides for histogram visualisation. Note that having the number of bins proportional to n^{1/3} is asymptotically optimal, which is why it appears in most estimators. These are simply plug-in methods that give good starting points for number of bins. In the equations below, h is the binwidth and n_h is the number of bins. All estimators that compute bin counts are recast to bin width using the ptp of the data. The final bin count is obtained from np.round(np.ceil(range / h)).
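As a minimal sketch (not part of the reference text), the recast from a bin width h to a bin count over the data's range can be written as:

```python
import numpy as np

def bin_count_from_width(data, h):
    # Recast a bin width h into a bin count over the data's range,
    # mirroring the np.ceil(range / h) step described above.
    return int(np.ceil(np.ptp(data) / h))

data = np.arange(10)           # range (ptp) is 9
bin_count_from_width(data, 2)  # ceil(9 / 2) = 5
```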
'auto' (maximum of the 'sturges' and 'fd' estimators)

A compromise to get a good value. For small datasets the Sturges value will usually be chosen, while larger datasets will usually default to FD.

'fd' (Freedman Diaconis Estimator)

h = 2 \frac{IQR}{n^{1/3}}

The binwidth is proportional to the interquartile range (IQR) and inversely proportional to the cube root of a.size. Can be too conservative for small datasets, but is quite good for large datasets. The IQR is very robust to outliers.
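As an illustrative sketch (not from the reference text), the Freedman-Diaconis width can be reproduced by hand with public NumPy calls and checked against bins='fd':

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)

# Freedman-Diaconis width: h = 2 * IQR / n^(1/3)
iqr = np.subtract(*np.percentile(x, [75, 25]))
h = 2.0 * iqr * x.size ** (-1.0 / 3.0)

# The final bin count is ceil(range / h), as described in the Notes
edges = np.histogram_bin_edges(x, bins='fd')
assert len(edges) - 1 == int(np.ceil(np.ptp(x) / h))
```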
'scott'

h = \sigma \sqrt[3]{\frac{24 \sqrt{\pi}}{n}}

The binwidth is proportional to the standard deviation of the data and inversely proportional to the cube root of x.size. Can be too conservative for small datasets, but is quite good for large datasets. The standard deviation is not very robust to outliers. Values are very similar to the Freedman-Diaconis estimator in the absence of outliers.
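A similar hand check is possible for Scott's rule (again a sketch, not part of the reference text), comparing the formula above against bins='scott':

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)

# Scott width: h = sigma * (24 * sqrt(pi) / n)^(1/3)
h = (24.0 * np.pi ** 0.5 / x.size) ** (1.0 / 3.0) * np.std(x)

edges = np.histogram_bin_edges(x, bins='scott')
assert len(edges) - 1 == int(np.ceil(np.ptp(x) / h))
```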
'rice'

n_h = 2 n^{1/3}

The number of bins is only proportional to the cube root of a.size. It tends to overestimate the number of bins and it does not take into account data variability.
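Since Rice computes a bin count, it is recast to a width over the data's range, as the Notes describe. A sketch (not from the reference text) of that recast checked against bins='rice':

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)

# Rice: n_h = 2 * n^(1/3) bins, recast to a width over the data range
h = np.ptp(x) / (2.0 * x.size ** (1.0 / 3.0))

edges = np.histogram_bin_edges(x, bins='rice')
assert len(edges) - 1 == int(np.ceil(np.ptp(x) / h))
```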
'sturges'

n_h = \log_{2}(n) + 1

The number of bins is the base 2 log of a.size. This estimator assumes normality of data and is too conservative for larger, non-normal datasets. This is the default method in R's hist method.
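Sturges also yields a bin count; as a sketch (not part of the reference text), the recast width can be checked against bins='sturges':

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)

# Sturges: n_h = log2(n) + 1 bins, recast to a width over the data range
h = np.ptp(x) / (np.log2(x.size) + 1.0)

edges = np.histogram_bin_edges(x, bins='sturges')
assert len(edges) - 1 == int(np.ceil(np.ptp(x) / h))
```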
'doane'

n_h = 1 + \log_{2}(n) + \log_{2}\left(1 + \frac{|g_1|}{\sigma_{g_1}}\right)

g_1 = \mathrm{mean}\left[\left(\frac{x - \mu}{\sigma}\right)^3\right]

\sigma_{g_1} = \sqrt{\frac{6(n - 2)}{(n + 1)(n + 3)}}

An improved version of Sturges' formula that produces better estimates for non-normal datasets. This estimator attempts to account for the skew of the data.
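The three equations above can be evaluated directly. As a sketch (not the library's internal code), the skewness-corrected count checked against bins='doane':

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
n = x.size

# Doane: correct Sturges for the sample skewness g1
sg1 = np.sqrt(6.0 * (n - 2) / ((n + 1.0) * (n + 3)))
g1 = np.mean(((x - np.mean(x)) / np.std(x)) ** 3.0)
h = np.ptp(x) / (1.0 + np.log2(n) + np.log2(1.0 + np.absolute(g1) / sg1))

edges = np.histogram_bin_edges(x, bins='doane')
assert len(edges) - 1 == int(np.ceil(np.ptp(x) / h))
```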
'sqrt'

n_h = \sqrt{n}

The simplest and fastest estimator. Only takes into account the data size.
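As a final sketch (not part of the reference text), the square-root rule checked against bins='sqrt':

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)

# Square-root rule: n_h = sqrt(n) bins, recast to a width over the range
h = np.ptp(x) / np.sqrt(x.size)

edges = np.histogram_bin_edges(x, bins='sqrt')
assert len(edges) - 1 == int(np.ceil(np.ptp(x) / h))
```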
Examples
>>> arr = np.array([0, 0, 0, 1, 2, 3, 3, 4, 5])
>>> np.histogram_bin_edges(arr, bins='auto', range=(0, 1))
array([0.  , 0.25, 0.5 , 0.75, 1.  ])
>>> np.histogram_bin_edges(arr, bins=2)
array([0. , 2.5, 5. ])
For consistency with histogram, an array of pre-computed bins is passed through unmodified:
>>> np.histogram_bin_edges(arr, [1, 2])
array([1, 2])
This function allows one set of bins to be computed, and reused across multiple histograms:
>>> shared_bins = np.histogram_bin_edges(arr, bins='auto')
>>> shared_bins
array([0., 1., 2., 3., 4., 5.])
>>> group_id = np.array([0, 1, 1, 0, 1, 1, 0, 1, 1])
>>> hist_0, _ = np.histogram(arr[group_id == 0], bins=shared_bins)
>>> hist_1, _ = np.histogram(arr[group_id == 1], bins=shared_bins)
>>> hist_0; hist_1
array([1, 1, 0, 1, 0])
array([2, 0, 1, 1, 2])
Which gives more easily comparable results than using separate bins for each histogram:
>>> hist_0, bins_0 = np.histogram(arr[group_id == 0], bins='auto')
>>> hist_1, bins_1 = np.histogram(arr[group_id == 1], bins='auto')
>>> hist_0; hist_1
array([1, 1, 1])
array([2, 1, 1, 2])
>>> bins_0; bins_1
array([0., 1., 2., 3.])
array([0.  , 1.25, 2.5 , 3.75, 5.  ])