Windowing operations#
pandas contains a compact set of APIs for performing windowing operations - an operation that performs an aggregation over a sliding partition of values. The API functions similarly to the groupby API in that Series and DataFrame call the windowing method with necessary parameters and then subsequently call the aggregation function.
```
In [1]: s = pd.Series(range(5))

In [2]: s.rolling(window=2).sum()
Out[2]: 
0    NaN
1    1.0
2    3.0
3    5.0
4    7.0
dtype: float64
```
The windows are composed by looking back the length of the window from the current observation. The result above can be derived by taking the sum of the following windowed partitions of data:
```
In [3]: for window in s.rolling(window=2):
   ...:     print(window)
   ...: 
0    0
dtype: int64
0    0
1    1
dtype: int64
1    1
2    2
dtype: int64
2    2
3    3
dtype: int64
3    3
4    4
dtype: int64
```
Overview#
pandas supports 4 types of windowing operations:
- Rolling window: Generic fixed or variable sliding window over the values.
- Weighted window: Weighted, non-rectangular window supplied by the scipy.signal library.
- Expanding window: Accumulating window over the values.
- Exponentially Weighted window: Accumulating and exponentially weighted window over the values.
| Concept | Method | Returned Object | Supports time-based windows | Supports chained groupby | Supports table method | Supports online operations |
|---|---|---|---|---|---|---|
| Rolling window | rolling() | Rolling | Yes | Yes | Yes (as of version 1.3) | No |
| Weighted window | rolling() | Window | No | No | No | No |
| Expanding window | expanding() | Expanding | No | Yes | Yes (as of version 1.3) | No |
| Exponentially Weighted window | ewm() | ExponentialMovingWindow | No | Yes (as of version 1.2) | No | Yes (as of version 1.3) |
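As a quick orientation, the sketch below (not part of the original examples) shows how each of the four windowing objects is constructed; the data and parameter choices are illustrative only, and the weighted window additionally assumes SciPy is installed.

```python
import pandas as pd

s = pd.Series(range(10), dtype="float64")

s.rolling(window=3).mean()                     # Rolling window -> Rolling object
s.rolling(window=3, win_type="triang").mean()  # Weighted window -> Window object (needs SciPy)
s.expanding(min_periods=1).mean()              # Expanding window -> Expanding object
s.ewm(span=3).mean()                           # Exponentially weighted window -> ExponentialMovingWindow object
```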
As noted above, some operations support specifying a window based on a time offset:
```
In [4]: s = pd.Series(range(5), index=pd.date_range('2020-01-01', periods=5, freq='1D'))

In [5]: s.rolling(window='2D').sum()
Out[5]: 
2020-01-01    0.0
2020-01-02    1.0
2020-01-03    3.0
2020-01-04    5.0
2020-01-05    7.0
Freq: D, dtype: float64
```
Additionally, some methods support chaining a groupby operation with a windowing operation which will first group the data by the specified keys and then perform a windowing operation per group.
```
In [6]: df = pd.DataFrame({'A': ['a', 'b', 'a', 'b', 'a'], 'B': range(5)})

In [7]: df.groupby('A').expanding().sum()
Out[7]: 
       B
A       
a 0  0.0
  2  2.0
  4  6.0
b 1  1.0
  3  4.0
```
Note
Windowing operations currently only support numeric data (integer and float) and will always return float64 values.
Warning
Some windowing aggregation methods (mean, sum, var and std) may suffer from numerical imprecision due to the underlying windowing algorithms accumulating sums. When values differ in magnitude by \(1/np.finfo(np.double).eps\) this results in truncation. Note that large values may have an impact on windows that do not include these values. Kahan summation is used to compute the rolling sums to preserve accuracy as much as possible.
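As a rough illustration of the magnitudes involved (a minimal sketch, not taken from the examples above), float64 addition starts to drop the smaller operand once the values differ by roughly \(1/np.finfo(np.double).eps\):

```python
import numpy as np
import pandas as pd

# ~4.5e15: the magnitude ratio beyond which float64 addition truncates the smaller value
print(1 / np.finfo(np.double).eps)
print(1e16 + 1.0 == 1e16)  # True: the 1.0 is lost when accumulated with 1e16

# Mixing such magnitudes inside one window is the scenario the warning describes
s = pd.Series([1e16, 1.0, 2.0, 3.0])
s.rolling(window=2).sum()
```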
Added in version 1.3.0.
Some windowing operations also support the method='table' option in the constructor which performs the windowing operation over an entire DataFrame instead of a single column or row at a time. This can provide a useful performance benefit for a DataFrame with many columns or rows (with the corresponding axis argument) or the ability to utilize other columns during the windowing operation. The method='table' option can only be used if engine='numba' is specified in the corresponding method call.
For example, a weighted mean calculation can be performed with apply() by specifying a separate column of weights.
```
In [8]: def weighted_mean(x):
   ...:     arr = np.ones((1, x.shape[1]))
   ...:     arr[:, :2] = (x[:, :2] * x[:, 2]).sum(axis=0) / x[:, 2].sum()
   ...:     return arr
   ...: 

In [9]: df = pd.DataFrame([[1, 2, 0.6], [2, 3, 0.4], [3, 4, 0.2], [4, 5, 0.7]])

In [10]: df.rolling(2, method="table", min_periods=0).apply(weighted_mean, raw=True, engine="numba")  # noqa: E501
Out[10]: 
          0         1    2
0  1.000000  2.000000  1.0
1  1.800000  2.000000  1.0
2  3.333333  2.333333  1.0
3  1.555556  7.000000  1.0
```
Added in version 1.3.
Some windowing operations also support an online method after constructing a windowing object which returns a new object that supports passing in new DataFrame or Series objects to continue the windowing calculation with the new values (i.e. online calculations).
The methods on this new windowing object must call the aggregation method first to “prime” the initial state of the online calculation. Then, new DataFrame or Series objects can be passed in the update argument to continue the windowing calculation.
```
In [11]: df = pd.DataFrame([[1, 2, 0.6], [2, 3, 0.4], [3, 4, 0.2], [4, 5, 0.7]])

In [12]: df.ewm(0.5).mean()
Out[12]: 
          0         1         2
0  1.000000  2.000000  0.600000
1  1.750000  2.750000  0.450000
2  2.615385  3.615385  0.276923
3  3.550000  4.550000  0.562500

In [13]: online_ewm = df.head(2).ewm(0.5).online()

In [14]: online_ewm.mean()
Out[14]: 
      0     1     2
0  1.00  2.00  0.60
1  1.75  2.75  0.45

In [15]: online_ewm.mean(update=df.tail(1))
Out[15]: 
          0         1         2
3  3.307692  4.307692  0.623077
```
All windowing operations support a min_periods argument that dictates the minimum number of non-np.nan values a window must have; otherwise, the resulting value is np.nan. min_periods defaults to 1 for time-based windows and to the window size for fixed windows.
```
In [16]: s = pd.Series([np.nan, 1, 2, np.nan, np.nan, 3])

In [17]: s.rolling(window=3, min_periods=1).sum()
Out[17]: 
0    NaN
1    1.0
2    3.0
3    3.0
4    2.0
5    3.0
dtype: float64

In [18]: s.rolling(window=3, min_periods=2).sum()
Out[18]: 
0    NaN
1    NaN
2    3.0
3    3.0
4    NaN
5    NaN
dtype: float64

# Equivalent to min_periods=3
In [19]: s.rolling(window=3, min_periods=None).sum()
Out[19]: 
0   NaN
1   NaN
2   NaN
3   NaN
4   NaN
5   NaN
dtype: float64
```
Additionally, all windowing operations support the aggregate method for returning a result of multiple aggregations applied to a window.
```
In [20]: df = pd.DataFrame({"A": range(5), "B": range(10, 15)})

In [21]: df.expanding().agg(["sum", "mean", "std"])
Out[21]: 
      A                    B                
    sum mean       std   sum  mean       std
0   0.0  0.0       NaN  10.0  10.0       NaN
1   1.0  0.5  0.707107  21.0  10.5  0.707107
2   3.0  1.0  1.000000  33.0  11.0  1.000000
3   6.0  1.5  1.290994  46.0  11.5  1.290994
4  10.0  2.0  1.581139  60.0  12.0  1.581139
```
Rolling window#
Generic rolling windows support specifying windows as a fixed number of observations or a variable number of observations based on an offset. If a time-based offset is provided, the corresponding time-based index must be monotonic.
```
In [22]: times = ['2020-01-01', '2020-01-03', '2020-01-04', '2020-01-05', '2020-01-29']

In [23]: s = pd.Series(range(5), index=pd.DatetimeIndex(times))

In [24]: s
Out[24]: 
2020-01-01    0
2020-01-03    1
2020-01-04    2
2020-01-05    3
2020-01-29    4
dtype: int64

# Window with 2 observations
In [25]: s.rolling(window=2).sum()
Out[25]: 
2020-01-01    NaN
2020-01-03    1.0
2020-01-04    3.0
2020-01-05    5.0
2020-01-29    7.0
dtype: float64

# Window with 2 days worth of observations
In [26]: s.rolling(window='2D').sum()
Out[26]: 
2020-01-01    0.0
2020-01-03    1.0
2020-01-04    3.0
2020-01-05    5.0
2020-01-29    4.0
dtype: float64
```
For all supported aggregation functions, see Rolling window functions.
Centering windows#
By default the labels are set to the right edge of the window, but a center keyword is available so the labels can be set at the center.
```
In [27]: s = pd.Series(range(10))

In [28]: s.rolling(window=5).mean()
Out[28]: 
0    NaN
1    NaN
2    NaN
3    NaN
4    2.0
5    3.0
6    4.0
7    5.0
8    6.0
9    7.0
dtype: float64

In [29]: s.rolling(window=5, center=True).mean()
Out[29]: 
0    NaN
1    NaN
2    2.0
3    3.0
4    4.0
5    5.0
6    6.0
7    7.0
8    NaN
9    NaN
dtype: float64
```
This can also be applied to datetime-like indices.
Added in version 1.3.0.
```
In [30]: df = pd.DataFrame(
   ....:     {"A": [0, 1, 2, 3, 4]}, index=pd.date_range("2020", periods=5, freq="1D")
   ....: )
   ....: 

In [31]: df
Out[31]: 
            A
2020-01-01  0
2020-01-02  1
2020-01-03  2
2020-01-04  3
2020-01-05  4

In [32]: df.rolling("2D", center=False).mean()
Out[32]: 
              A
2020-01-01  0.0
2020-01-02  0.5
2020-01-03  1.5
2020-01-04  2.5
2020-01-05  3.5

In [33]: df.rolling("2D", center=True).mean()
Out[33]: 
              A
2020-01-01  0.5
2020-01-02  1.5
2020-01-03  2.5
2020-01-04  3.5
2020-01-05  4.0
```
Rolling window endpoints#
The inclusion of the interval endpoints in rolling window calculations can be specified with the closed parameter:
| Value | Behavior |
|---|---|
| 'right' | close right endpoint |
| 'left' | close left endpoint |
| 'both' | close both endpoints |
| 'neither' | open endpoints |
For example, having the right endpoint open is useful in many problems that require that there is no contamination from present information back to past information. This allows the rolling window to compute statistics “up to that point in time”, but not including that point in time.
```
In [34]: df = pd.DataFrame(
   ....:     {"x": 1},
   ....:     index=[
   ....:         pd.Timestamp("20130101 09:00:01"),
   ....:         pd.Timestamp("20130101 09:00:02"),
   ....:         pd.Timestamp("20130101 09:00:03"),
   ....:         pd.Timestamp("20130101 09:00:04"),
   ....:         pd.Timestamp("20130101 09:00:06"),
   ....:     ],
   ....: )
   ....: 

In [35]: df["right"] = df.rolling("2s", closed="right").x.sum()  # default

In [36]: df["both"] = df.rolling("2s", closed="both").x.sum()

In [37]: df["left"] = df.rolling("2s", closed="left").x.sum()

In [38]: df["neither"] = df.rolling("2s", closed="neither").x.sum()

In [39]: df
Out[39]: 
                     x  right  both  left  neither
2013-01-01 09:00:01  1    1.0   1.0   NaN      NaN
2013-01-01 09:00:02  1    2.0   2.0   1.0      1.0
2013-01-01 09:00:03  1    2.0   3.0   2.0      1.0
2013-01-01 09:00:04  1    2.0   3.0   2.0      1.0
2013-01-01 09:00:06  1    1.0   2.0   1.0      NaN
```
Custom window rolling#
In addition to accepting an integer or offset as a window argument, rolling also accepts a BaseIndexer subclass that allows a user to define a custom method for calculating window bounds. The BaseIndexer subclass will need to define a get_window_bounds method that returns a tuple of two arrays, the first being the starting indices of the windows and the second being the ending indices of the windows. Additionally, num_values, min_periods, center, closed and step will automatically be passed to get_window_bounds and the defined method must always accept these arguments.
For example, if we have the following DataFrame
```
In [40]: use_expanding = [True, False, True, False, True]

In [41]: use_expanding
Out[41]: [True, False, True, False, True]

In [42]: df = pd.DataFrame({"values": range(5)})

In [43]: df
Out[43]: 
   values
0       0
1       1
2       2
3       3
4       4
```
and we want to use an expanding window where use_expanding is True otherwise a window of size 1, we can create the following BaseIndexer subclass:
```
In [44]: from pandas.api.indexers import BaseIndexer

In [45]: class CustomIndexer(BaseIndexer):
   ....:     def get_window_bounds(self, num_values, min_periods, center, closed, step):
   ....:         start = np.empty(num_values, dtype=np.int64)
   ....:         end = np.empty(num_values, dtype=np.int64)
   ....:         for i in range(num_values):
   ....:             if self.use_expanding[i]:
   ....:                 start[i] = 0
   ....:                 end[i] = i + 1
   ....:             else:
   ....:                 start[i] = i
   ....:                 end[i] = i + self.window_size
   ....:         return start, end
   ....: 

In [46]: indexer = CustomIndexer(window_size=1, use_expanding=use_expanding)

In [47]: df.rolling(indexer).sum()
Out[47]: 
   values
0     0.0
1     1.0
2     3.0
3     3.0
4    10.0
```
You can view other examples of BaseIndexer subclasses here.

One subclass of note within those examples is the VariableOffsetWindowIndexer that allows rolling operations over a non-fixed offset like a BusinessDay.
```
In [48]: from pandas.api.indexers import VariableOffsetWindowIndexer

In [49]: df = pd.DataFrame(range(10), index=pd.date_range("2020", periods=10))

In [50]: offset = pd.offsets.BDay(1)

In [51]: indexer = VariableOffsetWindowIndexer(index=df.index, offset=offset)

In [52]: df
Out[52]: 
            0
2020-01-01  0
2020-01-02  1
2020-01-03  2
2020-01-04  3
2020-01-05  4
2020-01-06  5
2020-01-07  6
2020-01-08  7
2020-01-09  8
2020-01-10  9

In [53]: df.rolling(indexer).sum()
Out[53]: 
               0
2020-01-01   0.0
2020-01-02   1.0
2020-01-03   2.0
2020-01-04   3.0
2020-01-05   7.0
2020-01-06  12.0
2020-01-07   6.0
2020-01-08   7.0
2020-01-09   8.0
2020-01-10   9.0
```
For some problems knowledge of the future is available for analysis. For example, this occurs when each data point is a full time series read from an experiment, and the task is to extract underlying conditions. In these cases it can be useful to perform forward-looking rolling window computations. The FixedForwardWindowIndexer class is available for this purpose. This BaseIndexer subclass implements a closed fixed-width forward-looking rolling window, and we can use it as follows:
```
In [54]: from pandas.api.indexers import FixedForwardWindowIndexer

In [55]: indexer = FixedForwardWindowIndexer(window_size=2)

In [56]: df.rolling(indexer, min_periods=1).sum()
Out[56]: 
               0
2020-01-01   1.0
2020-01-02   3.0
2020-01-03   5.0
2020-01-04   7.0
2020-01-05   9.0
2020-01-06  11.0
2020-01-07  13.0
2020-01-08  15.0
2020-01-09  17.0
2020-01-10   9.0
```
We can also achieve this by using slicing, applying rolling aggregation, and then flipping the result, as shown in the example below:
```
In [57]: df = pd.DataFrame(
   ....:     data=[
   ....:         [pd.Timestamp("2018-01-01 00:00:00"), 100],
   ....:         [pd.Timestamp("2018-01-01 00:00:01"), 101],
   ....:         [pd.Timestamp("2018-01-01 00:00:03"), 103],
   ....:         [pd.Timestamp("2018-01-01 00:00:04"), 111],
   ....:     ],
   ....:     columns=["time", "value"],
   ....: ).set_index("time")
   ....: 

In [58]: df
Out[58]: 
                     value
time                      
2018-01-01 00:00:00    100
2018-01-01 00:00:01    101
2018-01-01 00:00:03    103
2018-01-01 00:00:04    111

In [59]: reversed_df = df[::-1].rolling("2s").sum()[::-1]

In [60]: reversed_df
Out[60]: 
                     value
time                      
2018-01-01 00:00:00  201.0
2018-01-01 00:00:01  101.0
2018-01-01 00:00:03  214.0
2018-01-01 00:00:04  111.0
```
Rolling apply#
The apply() function takes an extra func argument and performs generic rolling computations. The func argument should be a single function that produces a single value from an ndarray input. raw specifies whether the windows are cast as Series objects (raw=False) or ndarray objects (raw=True).
```
In [61]: def mad(x):
   ....:     return np.fabs(x - x.mean()).mean()
   ....: 

In [62]: s = pd.Series(range(10))

In [63]: s.rolling(window=4).apply(mad, raw=True)
Out[63]: 
0    NaN
1    NaN
2    NaN
3    1.0
4    1.0
5    1.0
6    1.0
7    1.0
8    1.0
9    1.0
dtype: float64
```
Numba engine#
Additionally, apply() can leverage Numba if installed as an optional dependency. The apply aggregation can be executed using Numba by specifying engine='numba' and engine_kwargs arguments (raw must also be set to True). See enhancing performance with Numba for general usage of the arguments and performance considerations.
Numba will be applied in potentially two routines:
1. If func is a standard Python function, the engine will JIT the passed function. func can also be a JITed function in which case the engine will not JIT the function again.
2. The engine will JIT the for loop where the apply function is applied to each window.
The engine_kwargs argument is a dictionary of keyword arguments that will be passed into the numba.jit decorator. These keyword arguments will be applied to both the passed function (if a standard Python function) and the apply for loop over each window.
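A minimal sketch of a Numba-backed apply, assuming numba is installed; the mad function and data are reused from the Rolling apply example above, and the engine_kwargs shown are simply the defaults spelled out explicitly:

```python
import numpy as np
import pandas as pd

def mad(x):
    return np.fabs(x - x.mean()).mean()

s = pd.Series(range(10), dtype="float64")

# raw=True is required with engine="numba"; engine_kwargs are passed to numba.jit
s.rolling(window=4).apply(
    mad,
    raw=True,
    engine="numba",
    engine_kwargs={"nopython": True, "nogil": False, "parallel": False},
)
```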
Added in version 1.3.0.
mean, median, max, min, and sum also support the engine and engine_kwargs arguments.
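For instance (a minimal sketch, again assuming numba is installed), the built-in aggregations accept the same keywords:

```python
import pandas as pd

s = pd.Series(range(10), dtype="float64")

# Dispatch the built-in aggregations to the Numba engine
s.rolling(window=4).mean(engine="numba")
s.rolling(window=4).sum(engine="numba", engine_kwargs={"parallel": True})
```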
Binary window functions#
cov() and corr() can compute moving window statistics about two Series or any combination of DataFrame/Series or DataFrame/DataFrame. Here is the behavior in each case:
- two Series: compute the statistic for the pairing.
- DataFrame/Series: compute the statistics for each column of the DataFrame with the passed Series, thus returning a DataFrame.
- DataFrame/DataFrame: by default compute the statistic for matching column names, returning a DataFrame. If the keyword argument pairwise=True is passed then computes the statistic for each pair of columns, returning a DataFrame with a MultiIndex whose values are the dates in question (see the next section).
For example:
```
In [64]: df = pd.DataFrame(
   ....:     np.random.randn(10, 4),
   ....:     index=pd.date_range("2020-01-01", periods=10),
   ....:     columns=["A", "B", "C", "D"],
   ....: )
   ....: 

In [65]: df = df.cumsum()

In [66]: df2 = df[:4]

In [67]: df2.rolling(window=2).corr(df2["B"])
Out[67]: 
              A    B    C    D
2020-01-01  NaN  NaN  NaN  NaN
2020-01-02 -1.0  1.0 -1.0  1.0
2020-01-03  1.0  1.0  1.0 -1.0
2020-01-04 -1.0  1.0  1.0 -1.0
```
Computing rolling pairwise covariances and correlations#
In financial data analysis and other fields it’s common to compute covariance and correlation matrices for a collection of time series. Often one is also interested in moving-window covariance and correlation matrices. This can be done by passing the pairwise keyword argument, which in the case of DataFrame inputs will yield a MultiIndexed DataFrame whose index are the dates in question. In the case of a single DataFrame argument the pairwise argument can even be omitted:
Note
Missing values are ignored and each entry is computed using the pairwise complete observations.
Assuming the missing data are missing at random this results in an estimate for the covariance matrix which is unbiased. However, for many applications this estimate may not be acceptable because the estimated covariance matrix is not guaranteed to be positive semi-definite. This could lead to estimated correlations having absolute values which are greater than one, and/or a non-invertible covariance matrix. See Estimation of covariance matrices for more details.
```
In [68]: covs = (
   ....:     df[["B", "C", "D"]]
   ....:     .rolling(window=4)
   ....:     .cov(df[["A", "B", "C"]], pairwise=True)
   ....: )
   ....: 

In [69]: covs
Out[69]: 
                     B         C         D
2020-01-01 A       NaN       NaN       NaN
           B       NaN       NaN       NaN
           C       NaN       NaN       NaN
2020-01-02 A       NaN       NaN       NaN
           B       NaN       NaN       NaN
...                ...       ...       ...
2020-01-09 B  0.342006  0.230190  0.052849
           C  0.230190  1.575251  0.082901
2020-01-10 A -0.333945  0.006871 -0.655514
           B  0.649711  0.430860  0.469271
           C  0.430860  0.829721  0.055300

[30 rows x 3 columns]
```
Weighted window#
The win_type argument in .rolling generates weighted windows that are commonly used in filtering and spectral estimation. win_type must be a string that corresponds to a scipy.signal window function. SciPy must be installed in order to use these windows, and supplementary arguments that the SciPy window methods take must be specified in the aggregation function.
```
In [70]: s = pd.Series(range(10))

In [71]: s.rolling(window=5).mean()
Out[71]: 
0    NaN
1    NaN
2    NaN
3    NaN
4    2.0
5    3.0
6    4.0
7    5.0
8    6.0
9    7.0
dtype: float64

In [72]: s.rolling(window=5, win_type="triang").mean()
Out[72]: 
0    NaN
1    NaN
2    NaN
3    NaN
4    2.0
5    3.0
6    4.0
7    5.0
8    6.0
9    7.0
dtype: float64

# Supplementary Scipy arguments passed in the aggregation function
In [73]: s.rolling(window=5, win_type="gaussian").mean(std=0.1)
Out[73]: 
0    NaN
1    NaN
2    NaN
3    NaN
4    2.0
5    3.0
6    4.0
7    5.0
8    6.0
9    7.0
dtype: float64
```
For all supported aggregation functions, see Weighted window functions.
Expanding window#
An expanding window yields the value of an aggregation statistic with all the data available up to that point in time. Since these calculations are a special case of rolling statistics, they are implemented in pandas such that the following two calls are equivalent:
```
In [74]: df = pd.DataFrame(range(5))

In [75]: df.rolling(window=len(df), min_periods=1).mean()
Out[75]: 
     0
0  0.0
1  0.5
2  1.0
3  1.5
4  2.0

In [76]: df.expanding(min_periods=1).mean()
Out[76]: 
     0
0  0.0
1  0.5
2  1.0
3  1.5
4  2.0
```
For all supported aggregation functions, see Expanding window functions.
Exponentially weighted window#
An exponentially weighted window is similar to an expanding window but with each prior point being exponentially weighted down relative to the current point.
In general, a weighted moving average is calculated as

\[y_t = \frac{\sum_{i=0}^t w_i x_{t-i}}{\sum_{i=0}^t w_i},\]

where \(x_t\) is the input, \(y_t\) is the result and the \(w_i\) are the weights.
For all supported aggregation functions, see Exponentially-weighted window functions.
The EW functions support two variants of exponential weights. The default, adjust=True, uses the weights \(w_i = (1 - \alpha)^i\) which gives

\[y_t = \frac{x_t + (1 - \alpha) x_{t-1} + (1 - \alpha)^2 x_{t-2} + \cdots + (1 - \alpha)^t x_0}{1 + (1 - \alpha) + (1 - \alpha)^2 + \cdots + (1 - \alpha)^t}\]
When adjust=False is specified, moving averages are calculated as

\[y_0 = x_0, \qquad y_t = (1 - \alpha) y_{t-1} + \alpha x_t,\]
which is equivalent to using weights

\[w_i = \begin{cases} \alpha (1 - \alpha)^i & \text{if } i < t \\ (1 - \alpha)^i & \text{if } i = t \end{cases}\]
Note
These equations are sometimes written in terms of \(\alpha' = 1 - \alpha\), e.g.

\[y_t = \alpha' y_{t-1} + (1 - \alpha') x_t\]
The difference between the above two variants arises because we are dealing with series which have finite history. Consider a series of infinite history, with adjust=True:

\[y_t = \frac{x_t + (1 - \alpha) x_{t-1} + (1 - \alpha)^2 x_{t-2} + \cdots}{1 + (1 - \alpha) + (1 - \alpha)^2 + \cdots}\]
Noting that the denominator is a geometric series with initial term equal to 1 and a ratio of \(1 - \alpha\) we have

\[y_t = \alpha \left[x_t + (1 - \alpha) x_{t-1} + (1 - \alpha)^2 x_{t-2} + \cdots\right] = \alpha x_t + (1 - \alpha) y_{t-1},\]
which is the same expression as adjust=False above and therefore shows the equivalence of the two variants for infinite series. When adjust=False, we have \(y_0 = x_0\) and \(y_t = \alpha x_t + (1 - \alpha) y_{t-1}\). Therefore, there is an assumption that \(x_0\) is not an ordinary value but rather an exponentially weighted moment of the infinite series up to that point.
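A minimal sketch contrasting the two variants (the series and smoothing factor are arbitrary choices, not from the original examples):

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])

# adjust=True (default): weights (1 - alpha)^i renormalized over the finite history
s.ewm(alpha=0.5, adjust=True).mean()

# adjust=False: the recursive form y_t = (1 - alpha) * y_{t-1} + alpha * x_t, with y_0 = x_0
s.ewm(alpha=0.5, adjust=False).mean()
```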
One must have \(0 < \alpha \leq 1\), and while it is possible to pass \(\alpha\) directly, it’s often easier to think about either the span, center of mass (com) or half-life of an EW moment:

\[\alpha = \begin{cases} \frac{2}{s + 1}, & \text{for span}\ s \geq 1 \\ \frac{1}{1 + c}, & \text{for center of mass}\ c \geq 0 \\ 1 - \exp\left(\frac{\ln 0.5}{h}\right), & \text{for half-life}\ h > 0 \end{cases}\]

One must specify precisely one of span, center of mass, half-life and alpha to the EW functions:
- Span corresponds to what is commonly called an “N-day EW moving average”.
- Center of mass has a more physical interpretation and can be thought of in terms of span: \(c = (s - 1) / 2\).
- Half-life is the period of time for the exponential weight to reduce to one half.
- Alpha specifies the smoothing factor directly.
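The sketch below illustrates these relationships with arbitrary example values (span 9, so \(\alpha = 2 / (9 + 1) = 0.2\) and \(c = (9 - 1) / 2 = 4\)); it is a consistency check rather than an official recipe:

```python
import numpy as np
import pandas as pd

s = pd.Series(range(10), dtype="float64")

r_span = s.ewm(span=9).mean()
r_alpha = s.ewm(alpha=0.2).mean()
r_com = s.ewm(com=4).mean()

# All three parameterizations describe the same smoothing factor
assert np.allclose(r_span, r_alpha) and np.allclose(r_alpha, r_com)
```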
You can also specify halflife in terms of a timedelta convertible unit to specify the amount of time it takes for an observation to decay to half its value when also specifying a sequence of times.
```
In [77]: df = pd.DataFrame({"B": [0, 1, 2, np.nan, 4]})

In [78]: df
Out[78]: 
     B
0  0.0
1  1.0
2  2.0
3  NaN
4  4.0

In [79]: times = ["2020-01-01", "2020-01-03", "2020-01-10", "2020-01-15", "2020-01-17"]

In [80]: df.ewm(halflife="4 days", times=pd.DatetimeIndex(times)).mean()
Out[80]: 
          B
0  0.000000
1  0.585786
2  1.523889
3  1.523889
4  3.233686
```
The following formula is used to compute the exponentially weighted mean with an input vector of times:

\[y_t = \frac{\sum_{i=0}^t 0.5^{(t_t - t_i)/\lambda} x_i}{\sum_{i=0}^t 0.5^{(t_t - t_i)/\lambda}},\]

where \(t_i\) are the supplied times and \(\lambda\) is the specified half-life.
ExponentialMovingWindow also has an ignore_na argument, which determines how intermediate null values affect the calculation of the weights. When ignore_na=False (the default), weights are calculated based on absolute positions, so that intermediate null values affect the result. When ignore_na=True, weights are calculated by ignoring intermediate null values. For example, assuming adjust=True, if ignore_na=False, the weighted average of 3, NaN, 5 would be calculated as

\[\frac{(1 - \alpha)^2 \cdot 3 + 1 \cdot 5}{(1 - \alpha)^2 + 1}\]
Whereas if ignore_na=True, the weighted average would be calculated as

\[\frac{(1 - \alpha) \cdot 3 + 1 \cdot 5}{(1 - \alpha) + 1}\]
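A minimal sketch of the 3, NaN, 5 example above, using an arbitrary \(\alpha = 0.5\) with the default adjust=True; the hand-computed values in the comments follow the two formulas:

```python
import numpy as np
import pandas as pd

s = pd.Series([3.0, np.nan, 5.0])

# ignore_na=False (default): the NaN keeps its position, so 3 is weighted by (1 - alpha)^2
# last value: (0.25 * 3 + 1 * 5) / (0.25 + 1) = 4.6
s.ewm(alpha=0.5, ignore_na=False).mean()

# ignore_na=True: the NaN is skipped when assigning weights, so 3 is weighted by (1 - alpha)
# last value: (0.5 * 3 + 1 * 5) / (0.5 + 1) = 4.333...
s.ewm(alpha=0.5, ignore_na=True).mean()
```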
The var(), std(), and cov() functions have a bias argument, specifying whether the result should contain biased or unbiased statistics. For example, if bias=True, ewmvar(x) is calculated as ewmvar(x) = ewma(x**2) - ewma(x)**2; whereas if bias=False (the default), the biased variance statistics are scaled by debiasing factors

\[\frac{\left(\sum_{i=0}^t w_i\right)^2}{\left(\sum_{i=0}^t w_i\right)^2 - \sum_{i=0}^t w_i^2}\]
(For \(w_i = 1\), this reduces to the usual \(N / (N - 1)\) factor, with \(N = t + 1\).) See Weighted Sample Variance on Wikipedia for further details.