Essential basic functionality#
Here we discuss a lot of the essential functionality common to the pandas data structures. To begin, let's create some example objects like we did in the 10 minutes to pandas section:

In [1]: index = pd.date_range("1/1/2000", periods=8)

In [2]: s = pd.Series(np.random.randn(5), index=["a", "b", "c", "d", "e"])

In [3]: df = pd.DataFrame(np.random.randn(8, 3), index=index, columns=["A", "B", "C"])
Head and tail#
To view a small sample of a Series or DataFrame object, use the head() and tail() methods. The default number of elements to display is five, but you may pass a custom number.

In [4]: long_series = pd.Series(np.random.randn(1000))

In [5]: long_series.head()
Out[5]:
0   -1.157892
1   -1.344312
2    0.844885
3    1.075770
4   -0.109050
dtype: float64

In [6]: long_series.tail(3)
Out[6]:
997   -0.289388
998   -1.020544
999    0.589993
dtype: float64
Attributes and underlying data#
pandas objects have a number of attributes enabling you to access the metadata:

- shape: gives the axis dimensions of the object, consistent with ndarray
- Axis labels
  - Series: index (only axis)
  - DataFrame: index (rows) and columns

Note, these attributes can be safely assigned to!
In [7]:df[:2]Out[7]: A B C2000-01-01 -0.173215 0.119209 -1.0442362000-01-02 -0.861849 -2.104569 -0.494929In [8]:df.columns=[x.lower()forxindf.columns]In [9]:dfOut[9]: a b c2000-01-01 -0.173215 0.119209 -1.0442362000-01-02 -0.861849 -2.104569 -0.4949292000-01-03 1.071804 0.721555 -0.7067712000-01-04 -1.039575 0.271860 -0.4249722000-01-05 0.567020 0.276232 -1.0874012000-01-06 -0.673690 0.113648 -1.4784272000-01-07 0.524988 0.404705 0.5770462000-01-08 -1.715002 -1.039268 -0.370647
pandas objects (Index, Series, DataFrame) can be thought of as containers for arrays, which hold the actual data and do the actual computation. For many types, the underlying array is a numpy.ndarray. However, pandas and 3rd party libraries may extend NumPy's type system to add support for custom arrays (see dtypes).
To get the actual data inside an Index or Series, use the .array property:

In [10]: s.array
Out[10]:
<NumpyExtensionArray>
[ 0.4691122999071863, -0.2828633443286633, -1.5090585031735124,
  -1.1356323710171934,  1.2121120250208506]
Length: 5, dtype: float64

In [11]: s.index.array
Out[11]:
<NumpyExtensionArray>
['a', 'b', 'c', 'd', 'e']
Length: 5, dtype: object
.array will always be an ExtensionArray. The exact details of what an ExtensionArray is and why pandas uses them are a bit beyond the scope of this introduction. See dtypes for more.
If you know you need a NumPy array, use to_numpy() or numpy.asarray().

In [12]: s.to_numpy()
Out[12]: array([ 0.4691, -0.2829, -1.5091, -1.1356,  1.2121])

In [13]: np.asarray(s)
Out[13]: array([ 0.4691, -0.2829, -1.5091, -1.1356,  1.2121])
When the Series or Index is backed by an ExtensionArray, to_numpy() may involve copying data and coercing values. See dtypes for more.
to_numpy() gives some control over the dtype of the resulting numpy.ndarray. For example, consider datetimes with timezones. NumPy doesn't have a dtype to represent timezone-aware datetimes, so there are two possibly useful representations:
- An object-dtype numpy.ndarray with Timestamp objects, each with the correct tz
- A datetime64[ns]-dtype numpy.ndarray, where the values have been converted to UTC and the timezone discarded
Timezones may be preserved with dtype=object:

In [14]: ser = pd.Series(pd.date_range("2000", periods=2, tz="CET"))

In [15]: ser.to_numpy(dtype=object)
Out[15]:
array([Timestamp('2000-01-01 00:00:00+0100', tz='CET'),
       Timestamp('2000-01-02 00:00:00+0100', tz='CET')], dtype=object)

Or thrown away with dtype='datetime64[ns]':

In [16]: ser.to_numpy(dtype="datetime64[ns]")
Out[16]:
array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'],
      dtype='datetime64[ns]')
Getting the “raw data” inside a DataFrame is possibly a bit more complex. When your DataFrame only has a single data type for all the columns, DataFrame.to_numpy() will return the underlying data:
In [17]:df.to_numpy()Out[17]:array([[-0.1732, 0.1192, -1.0442], [-0.8618, -2.1046, -0.4949], [ 1.0718, 0.7216, -0.7068], [-1.0396, 0.2719, -0.425 ], [ 0.567 , 0.2762, -1.0874], [-0.6737, 0.1136, -1.4784], [ 0.525 , 0.4047, 0.577 ], [-1.715 , -1.0393, -0.3706]])
If a DataFrame contains homogeneously-typed data, the ndarray can actually be modified in-place, and the changes will be reflected in the data structure. For heterogeneous data (e.g. some of the DataFrame's columns are not all the same dtype), this will not be the case. The values attribute itself, unlike the axis labels, cannot be assigned to.
Note
When working with heterogeneous data, the dtype of the resulting ndarray will be chosen to accommodate all of the data involved. For example, if strings are involved, the result will be of object dtype. If there are only floats and integers, the resulting array will be of float dtype.
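As a minimal sketch of that rule (using a small hypothetical frame, not one defined above): strings force object dtype, while floats and integers together give float dtype.

import numpy as np
import pandas as pd

# hypothetical frame mixing integers, floats and strings
mixed = pd.DataFrame({"ints": [1, 2], "floats": [1.5, 2.5], "strs": ["a", "b"]})
print(mixed.to_numpy().dtype)                      # object (strings involved)
print(mixed[["ints", "floats"]].to_numpy().dtype)  # float64 (ints upcast to float)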
In the past, pandas recommended Series.values or DataFrame.values for extracting the data from a Series or DataFrame. You'll still find references to these in old code bases and online. Going forward, we recommend avoiding .values and using .array or .to_numpy(). .values has the following drawbacks:
- When your Series contains an extension type, it's unclear whether Series.values returns a NumPy array or the extension array. Series.array will always return an ExtensionArray, and will never copy data. Series.to_numpy() will always return a NumPy array, potentially at the cost of copying / coercing values.
- When your DataFrame contains a mixture of data types, DataFrame.values may involve copying data and coercing values to a common dtype, a relatively expensive operation. DataFrame.to_numpy(), being a method, makes it clearer that the returned NumPy array may not be a view on the same data in the DataFrame.
Accelerated operations#
pandas has support for accelerating certain types of binary numerical and boolean operations using the numexpr and bottleneck libraries.

These libraries are especially useful when dealing with large data sets, and provide large speedups. numexpr uses smart chunking, caching, and multiple cores. bottleneck is a set of specialized cython routines that are especially fast when dealing with arrays that have nans.
Here is a sample (using 100 column x 100,000 row DataFrames):
| Operation  | 0.11.0 (ms) | Prior Version (ms) | Ratio to Prior |
|------------|-------------|--------------------|----------------|
| df1 > df2  | 13.32       | 125.35             | 0.1063         |
| df1 * df2  | 21.71       | 36.63              | 0.5928         |
| df1 + df2  | 22.04       | 36.50              | 0.6039         |
You are highly encouraged to install both libraries. See the section Recommended Dependencies for more installation info.

These are both enabled to be used by default; you can control this by setting the options:

pd.set_option("compute.use_bottleneck", False)
pd.set_option("compute.use_numexpr", False)
Flexible binary operations#
With binary operations between pandas data structures, there are two key points of interest:

Broadcasting behavior between higher- (e.g. DataFrame) and lower-dimensional (e.g. Series) objects.
Missing data in computations.
We will demonstrate how to manage these issues independently, though they can be handled simultaneously.
Matching / broadcasting behavior#
DataFrame has the methods add(), sub(), mul(), div() and related functions radd(), rsub(), … for carrying out binary operations. For broadcasting behavior, Series input is of primary interest. Using these functions, you can choose to match on either the index or columns via the axis keyword:
In [18]:df=pd.DataFrame( ....:{ ....:"one":pd.Series(np.random.randn(3),index=["a","b","c"]), ....:"two":pd.Series(np.random.randn(4),index=["a","b","c","d"]), ....:"three":pd.Series(np.random.randn(3),index=["b","c","d"]), ....:} ....:) ....:In [19]:dfOut[19]: one two threea 1.394981 1.772517 NaNb 0.343054 1.912123 -0.050390c 0.695246 1.478369 1.227435d NaN 0.279344 -0.613172In [20]:row=df.iloc[1]In [21]:column=df["two"]In [22]:df.sub(row,axis="columns")Out[22]: one two threea 1.051928 -0.139606 NaNb 0.000000 0.000000 0.000000c 0.352192 -0.433754 1.277825d NaN -1.632779 -0.562782In [23]:df.sub(row,axis=1)Out[23]: one two threea 1.051928 -0.139606 NaNb 0.000000 0.000000 0.000000c 0.352192 -0.433754 1.277825d NaN -1.632779 -0.562782In [24]:df.sub(column,axis="index")Out[24]: one two threea -0.377535 0.0 NaNb -1.569069 0.0 -1.962513c -0.783123 0.0 -0.250933d NaN 0.0 -0.892516In [25]:df.sub(column,axis=0)Out[25]: one two threea -0.377535 0.0 NaNb -1.569069 0.0 -1.962513c -0.783123 0.0 -0.250933d NaN 0.0 -0.892516
Furthermore you can align a level of a MultiIndexed DataFrame with a Series.
In [26]:dfmi=df.copy()In [27]:dfmi.index=pd.MultiIndex.from_tuples( ....:[(1,"a"),(1,"b"),(1,"c"),(2,"a")],names=["first","second"] ....:) ....:In [28]:dfmi.sub(column,axis=0,level="second")Out[28]: one two threefirst second1 a -0.377535 0.000000 NaN b -1.569069 0.000000 -1.962513 c -0.783123 0.000000 -0.2509332 a NaN -1.493173 -2.385688
Series and Index also support the divmod() builtin. This function performs floor division and the modulo operation at the same time, returning a two-tuple of the same type as the left hand side. For example:
In [29]:s=pd.Series(np.arange(10))In [30]:sOut[30]:0 01 12 23 34 45 56 67 78 89 9dtype: int64In [31]:div,rem=divmod(s,3)In [32]:divOut[32]:0 01 02 03 14 15 16 27 28 29 3dtype: int64In [33]:remOut[33]:0 01 12 23 04 15 26 07 18 29 0dtype: int64In [34]:idx=pd.Index(np.arange(10))In [35]:idxOut[35]:Index([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype='int64')In [36]:div,rem=divmod(idx,3)In [37]:divOut[37]:Index([0, 0, 0, 1, 1, 1, 2, 2, 2, 3], dtype='int64')In [38]:remOut[38]:Index([0, 1, 2, 0, 1, 2, 0, 1, 2, 0], dtype='int64')
We can also do elementwise divmod():
In [39]:div,rem=divmod(s,[2,2,3,3,4,4,5,5,6,6])In [40]:divOut[40]:0 01 02 03 14 15 16 17 18 19 1dtype: int64In [41]:remOut[41]:0 01 12 23 04 05 16 17 28 29 3dtype: int64
Missing data / operations with fill values#
In Series and DataFrame, the arithmetic functions have the option of inputting a fill_value, namely a value to substitute when at most one of the values at a location is missing. For example, when adding two DataFrame objects, you may wish to treat NaN as 0 unless both DataFrames are missing that value, in which case the result will be NaN (you can later replace NaN with some other value using fillna if you wish).
In [42]:df2=df.copy()In [43]:df2.loc["a","three"]=1.0In [44]:dfOut[44]: one two threea 1.394981 1.772517 NaNb 0.343054 1.912123 -0.050390c 0.695246 1.478369 1.227435d NaN 0.279344 -0.613172In [45]:df2Out[45]: one two threea 1.394981 1.772517 1.000000b 0.343054 1.912123 -0.050390c 0.695246 1.478369 1.227435d NaN 0.279344 -0.613172In [46]:df+df2Out[46]: one two threea 2.789963 3.545034 NaNb 0.686107 3.824246 -0.100780c 1.390491 2.956737 2.454870d NaN 0.558688 -1.226343In [47]:df.add(df2,fill_value=0)Out[47]: one two threea 2.789963 3.545034 1.000000b 0.686107 3.824246 -0.100780c 1.390491 2.956737 2.454870d NaN 0.558688 -1.226343
Flexible comparisons#
Series and DataFrame have the binary comparison methods eq, ne, lt, gt, le, and ge whose behavior is analogous to the binary arithmetic operations described above:
In [48]:df.gt(df2)Out[48]: one two threea False False Falseb False False Falsec False False Falsed False False FalseIn [49]:df2.ne(df)Out[49]: one two threea False False Trueb False False Falsec False False Falsed True False False
These operations produce a pandas object of the same type as the left-hand-side input that is of dtype bool. These boolean objects can be used in indexing operations; see the section on Boolean indexing.
Boolean reductions#
You can apply the reductions: empty, any(), all(), and bool() to provide a way to summarize a boolean result.
In [50]:(df>0).all()Out[50]:one Falsetwo Truethree Falsedtype: boolIn [51]:(df>0).any()Out[51]:one Truetwo Truethree Truedtype: bool
You can reduce to a final boolean value.
In [52]: (df > 0).any().any()
Out[52]: True

You can test if a pandas object is empty via the empty property.

In [53]: df.empty
Out[53]: False

In [54]: pd.DataFrame(columns=list("ABC")).empty
Out[54]: True
Warning
Asserting the truthiness of a pandas object will raise an error, as the testing of the emptiness or values is ambiguous.
In [55]:ifdf: ....:print(True) ....:---------------------------------------------------------------------------ValueErrorTraceback (most recent call last)<ipython-input-55-318d08b2571a> in?()---->1ifdf:2print(True)~/work/pandas/pandas/pandas/core/generic.py in?(self)1575@final1576def__nonzero__(self)->NoReturn:->1577raiseValueError(1578f"The truth value of a{type(self).__name__} is ambiguous. "1579"Use a.empty, a.bool(), a.item(), a.any() or a.all()."1580)ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
In [56]:dfanddf2---------------------------------------------------------------------------ValueErrorTraceback (most recent call last)<ipython-input-56-b241b64bb471> in?()---->1dfanddf2~/work/pandas/pandas/pandas/core/generic.py in?(self)1575@final1576def__nonzero__(self)->NoReturn:->1577raiseValueError(1578f"The truth value of a{type(self).__name__} is ambiguous. "1579"Use a.empty, a.bool(), a.item(), a.any() or a.all()."1580)ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
See gotchas for a more detailed discussion.
Comparing if objects are equivalent#
Often you may find that there is more than one way to compute the same result. As a simple example, consider df + df and df * 2. To test that these two computations produce the same result, given the tools shown above, you might imagine using (df + df == df * 2).all(). But in fact, this expression is False:
In [57]:df+df==df*2Out[57]: one two threea True True Falseb True True Truec True True Trued False True TrueIn [58]:(df+df==df*2).all()Out[58]:one Falsetwo Truethree Falsedtype: bool
Notice that the boolean DataFrame df + df == df * 2 contains some False values! This is because NaNs do not compare as equal:

In [59]: np.nan == np.nan
Out[59]: False
So, NDFrames (such as Series and DataFrames) have an equals() method for testing equality, with NaNs in corresponding locations treated as equal.

In [60]: (df + df).equals(df * 2)
Out[60]: True

Note that the Series or DataFrame index needs to be in the same order for equality to be True:

In [61]: df1 = pd.DataFrame({"col": ["foo", 0, np.nan]})

In [62]: df2 = pd.DataFrame({"col": [np.nan, 0, "foo"]}, index=[2, 1, 0])

In [63]: df1.equals(df2)
Out[63]: False

In [64]: df1.equals(df2.sort_index())
Out[64]: True
Comparing array-like objects#
You can conveniently perform element-wise comparisons when comparing a pandas data structure with a scalar value:
In [65]:pd.Series(["foo","bar","baz"])=="foo"Out[65]:0 True1 False2 Falsedtype: boolIn [66]:pd.Index(["foo","bar","baz"])=="foo"Out[66]:array([ True, False, False])
pandas also handles element-wise comparisons between different array-like objects of the same length:
In [67]:pd.Series(["foo","bar","baz"])==pd.Index(["foo","bar","qux"])Out[67]:0 True1 True2 Falsedtype: boolIn [68]:pd.Series(["foo","bar","baz"])==np.array(["foo","bar","qux"])Out[68]:0 True1 True2 Falsedtype: bool
Trying to compare Index or Series objects of different lengths will raise a ValueError:
In [69]:pd.Series(['foo','bar','baz'])==pd.Series(['foo','bar'])---------------------------------------------------------------------------ValueErrorTraceback (most recent call last)CellIn[69],line1---->1pd.Series(['foo','bar','baz'])==pd.Series(['foo','bar'])File ~/work/pandas/pandas/pandas/core/ops/common.py:76, in_unpack_zerodim_and_defer.<locals>.new_method(self, other)72returnNotImplemented74other=item_from_zerodim(other)--->76returnmethod(self,other)File ~/work/pandas/pandas/pandas/core/arraylike.py:40, inOpsMixin.__eq__(self, other)38@unpack_zerodim_and_defer("__eq__")39def__eq__(self,other):--->40returnself._cmp_method(other,operator.eq)File ~/work/pandas/pandas/pandas/core/series.py:6114, inSeries._cmp_method(self, other, op)6111res_name=ops.get_op_result_name(self,other)6113ifisinstance(other,Series)andnotself._indexed_same(other):->6114raiseValueError("Can only compare identically-labeled Series objects")6116lvalues=self._values6117rvalues=extract_array(other,extract_numpy=True,extract_range=True)ValueError: Can only compare identically-labeled Series objectsIn [70]:pd.Series(['foo','bar','baz'])==pd.Series(['foo'])---------------------------------------------------------------------------ValueErrorTraceback (most recent call last)CellIn[70],line1---->1pd.Series(['foo','bar','baz'])==pd.Series(['foo'])File ~/work/pandas/pandas/pandas/core/ops/common.py:76, in_unpack_zerodim_and_defer.<locals>.new_method(self, other)72returnNotImplemented74other=item_from_zerodim(other)--->76returnmethod(self,other)File ~/work/pandas/pandas/pandas/core/arraylike.py:40, inOpsMixin.__eq__(self, other)38@unpack_zerodim_and_defer("__eq__")39def__eq__(self,other):--->40returnself._cmp_method(other,operator.eq)File ~/work/pandas/pandas/pandas/core/series.py:6114, inSeries._cmp_method(self, other, op)6111res_name=ops.get_op_result_name(self,other)6113ifisinstance(other,Series)andnotself._indexed_same(other):->6114raiseValueError("Can only compare identically-labeled Series objects")6116lvalues=self._values6117rvalues=extract_array(other,extract_numpy=True,extract_range=True)ValueError: Can only compare identically-labeled Series objects
Combining overlapping data sets#
A problem occasionally arising is the combination of two similar data sets where values in one are preferred over the other. An example would be two data series representing a particular economic indicator where one is considered to be of “higher quality”. However, the lower quality series might extend further back in history or have more complete data coverage. As such, we would like to combine two DataFrame objects where missing values in one DataFrame are conditionally filled with like-labeled values from the other DataFrame. The function implementing this operation is combine_first(), which we illustrate:
In [71]:df1=pd.DataFrame( ....:{"A":[1.0,np.nan,3.0,5.0,np.nan],"B":[np.nan,2.0,3.0,np.nan,6.0]} ....:) ....:In [72]:df2=pd.DataFrame( ....:{ ....:"A":[5.0,2.0,4.0,np.nan,3.0,7.0], ....:"B":[np.nan,np.nan,3.0,4.0,6.0,8.0], ....:} ....:) ....:In [73]:df1Out[73]: A B0 1.0 NaN1 NaN 2.02 3.0 3.03 5.0 NaN4 NaN 6.0In [74]:df2Out[74]: A B0 5.0 NaN1 2.0 NaN2 4.0 3.03 NaN 4.04 3.0 6.05 7.0 8.0In [75]:df1.combine_first(df2)Out[75]: A B0 1.0 NaN1 2.0 2.02 3.0 3.03 5.0 4.04 3.0 6.05 7.0 8.0
General DataFrame combine#
The combine_first() method above calls the more general DataFrame.combine(). This method takes another DataFrame and a combiner function, aligns the input DataFrame, and then passes the combiner function pairs of Series (i.e., columns whose names are the same).
So, for instance, to reproduce combine_first() as above:
In [76]:defcombiner(x,y): ....:returnnp.where(pd.isna(x),y,x) ....:In [77]:df1.combine(df2,combiner)Out[77]: A B0 1.0 NaN1 2.0 2.02 3.0 3.03 5.0 4.04 3.0 6.05 7.0 8.0
Descriptive statistics#
There exists a large number of methods for computing descriptive statistics and other related operations on Series and DataFrame. Most of these are aggregations (hence producing a lower-dimensional result) like sum(), mean(), and quantile(), but some of them, like cumsum() and cumprod(), produce an object of the same size. Generally speaking, these methods take an axis argument, just like ndarray.{sum, std, …}, but the axis can be specified by name or integer:
Series: no axis argument needed
DataFrame: “index” (axis=0, default), “columns” (axis=1)
For example:
In [78]:dfOut[78]: one two threea 1.394981 1.772517 NaNb 0.343054 1.912123 -0.050390c 0.695246 1.478369 1.227435d NaN 0.279344 -0.613172In [79]:df.mean(0)Out[79]:one 0.811094two 1.360588three 0.187958dtype: float64In [80]:df.mean(1)Out[80]:a 1.583749b 0.734929c 1.133683d -0.166914dtype: float64
All such methods have a skipna option signaling whether to exclude missing data (True by default):
In [81]:df.sum(0,skipna=False)Out[81]:one NaNtwo 5.442353three NaNdtype: float64In [82]:df.sum(axis=1,skipna=True)Out[82]:a 3.167498b 2.204786c 3.401050d -0.333828dtype: float64
Combined with the broadcasting / arithmetic behavior, one can describe various statistical procedures, like standardization (rendering data zero mean and standard deviation of 1), very concisely:
In [83]:ts_stand=(df-df.mean())/df.std()In [84]:ts_stand.std()Out[84]:one 1.0two 1.0three 1.0dtype: float64In [85]:xs_stand=df.sub(df.mean(1),axis=0).div(df.std(1),axis=0)In [86]:xs_stand.std(1)Out[86]:a 1.0b 1.0c 1.0d 1.0dtype: float64
Note that methods like cumsum() and cumprod() preserve the location of NaN values. This is somewhat different from expanding() and rolling(), since NaN behavior is furthermore dictated by a min_periods parameter.
In [87]:df.cumsum()Out[87]: one two threea 1.394981 1.772517 NaNb 1.738035 3.684640 -0.050390c 2.433281 5.163008 1.177045d NaN 5.442353 0.563873
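For contrast, here is a hedged sketch of how min_periods changes NaN handling in a rolling window over the same df (output omitted; a smaller min_periods tolerates missing observations inside the window):

df.rolling(window=2, min_periods=1).sum()  # one valid observation per window is enough
df.rolling(window=2, min_periods=2).sum()  # both observations must be non-NaN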
Here is a quick reference summary table of common functions. Each also takes an optional level parameter which applies only if the object has a hierarchical index.
| Function | Description                                |
|----------|--------------------------------------------|
| count    | Number of non-NA observations              |
| sum      | Sum of values                              |
| mean     | Mean of values                             |
| median   | Arithmetic median of values                |
| min      | Minimum                                    |
| max      | Maximum                                    |
| mode     | Mode                                       |
| abs      | Absolute Value                             |
| prod     | Product of values                          |
| std      | Bessel-corrected sample standard deviation |
| var      | Unbiased variance                          |
| sem      | Standard error of the mean                 |
| skew     | Sample skewness (3rd moment)               |
| kurt     | Sample kurtosis (4th moment)               |
| quantile | Sample quantile (value at %)               |
| cumsum   | Cumulative sum                             |
| cumprod  | Cumulative product                         |
| cummax   | Cumulative maximum                         |
| cummin   | Cumulative minimum                         |
Note that by chance some NumPy methods, like mean, std, and sum, will exclude NAs on Series input by default:

In [88]: np.mean(df["one"])
Out[88]: 0.8110935116651192

In [89]: np.mean(df["one"].to_numpy())
Out[89]: nan

Series.nunique() will return the number of unique non-NA values in a Series:

In [90]: series = pd.Series(np.random.randn(500))

In [91]: series[20:500] = np.nan

In [92]: series[10:20] = 5

In [93]: series.nunique()
Out[93]: 11
Summarizing data: describe#
There is a convenient describe() function which computes a variety of summary statistics about a Series or the columns of a DataFrame (excluding NAs of course):
In [94]:series=pd.Series(np.random.randn(1000))In [95]:series[::2]=np.nanIn [96]:series.describe()Out[96]:count 500.000000mean -0.021292std 1.015906min -2.68376325% -0.69907050% -0.06971875% 0.714483max 3.160915dtype: float64In [97]:frame=pd.DataFrame(np.random.randn(1000,5),columns=["a","b","c","d","e"])In [98]:frame.iloc[::2]=np.nanIn [99]:frame.describe()Out[99]: a b c d ecount 500.000000 500.000000 500.000000 500.000000 500.000000mean 0.033387 0.030045 -0.043719 -0.051686 0.005979std 1.017152 0.978743 1.025270 1.015988 1.006695min -3.000951 -2.637901 -3.303099 -3.159200 -3.18882125% -0.647623 -0.576449 -0.712369 -0.691338 -0.69111550% 0.047578 -0.021499 -0.023888 -0.032652 -0.02536375% 0.729907 0.775880 0.618896 0.670047 0.649748max 2.740139 2.752332 3.004229 2.728702 3.240991
You can select specific percentiles to include in the output:
In [100]:series.describe(percentiles=[0.05,0.25,0.75,0.95])Out[100]:count 500.000000mean -0.021292std 1.015906min -2.6837635% -1.64542325% -0.69907050% -0.06971875% 0.71448395% 1.711409max 3.160915dtype: float64
By default, the median is always included.
For a non-numerical Series object, describe() will give a simple summary of the number of unique values and most frequently occurring values:
In [101]:s=pd.Series(["a","a","b","b","a","a",np.nan,"c","d","a"])In [102]:s.describe()Out[102]:count 9unique 4top afreq 5dtype: object
Note that on a mixed-type DataFrame object, describe() will restrict the summary to include only numerical columns or, if none are, only categorical columns:
In [103]:frame=pd.DataFrame({"a":["Yes","Yes","No","No"],"b":range(4)})In [104]:frame.describe()Out[104]: bcount 4.000000mean 1.500000std 1.290994min 0.00000025% 0.75000050% 1.50000075% 2.250000max 3.000000
This behavior can be controlled by providing a list of types as include/exclude arguments. The special value all can also be used:
In [105]:frame.describe(include=["object"])Out[105]: acount 4unique 2top Yesfreq 2In [106]:frame.describe(include=["number"])Out[106]: bcount 4.000000mean 1.500000std 1.290994min 0.00000025% 0.75000050% 1.50000075% 2.250000max 3.000000In [107]:frame.describe(include="all")Out[107]: a bcount 4 4.000000unique 2 NaNtop Yes NaNfreq 2 NaNmean NaN 1.500000std NaN 1.290994min NaN 0.00000025% NaN 0.75000050% NaN 1.50000075% NaN 2.250000max NaN 3.000000
That feature relies on select_dtypes. Refer there for details about accepted inputs.
Index of min/max values#
The idxmin() and idxmax() functions on Series and DataFrame compute the index labels with the minimum and maximum corresponding values:
In [108]:s1=pd.Series(np.random.randn(5))In [109]:s1Out[109]:0 1.1180761 -0.3520512 -1.2428833 -1.2771554 -0.641184dtype: float64In [110]:s1.idxmin(),s1.idxmax()Out[110]:(3, 0)In [111]:df1=pd.DataFrame(np.random.randn(5,3),columns=["A","B","C"])In [112]:df1Out[112]: A B C0 -0.327863 -0.946180 -0.1375701 -0.186235 -0.257213 -0.4865672 -0.507027 -0.871259 -0.1111103 2.000339 -2.430505 0.0897594 -0.321434 -0.033695 0.096271In [113]:df1.idxmin(axis=0)Out[113]:A 2B 3C 1dtype: int64In [114]:df1.idxmax(axis=1)Out[114]:0 C1 A2 C3 A4 Cdtype: object
When there are multiple rows (or columns) matching the minimum or maximum value, idxmin() and idxmax() return the first matching index:
In [115]:df3=pd.DataFrame([2,1,1,3,np.nan],columns=["A"],index=list("edcba"))In [116]:df3Out[116]: Ae 2.0d 1.0c 1.0b 3.0a NaNIn [117]:df3["A"].idxmin()Out[117]:'d'
Note
idxmin and idxmax are called argmin and argmax in NumPy.
Value counts (histogramming) / mode#
The value_counts() Series method computes a histogram of a 1D array of values. It can also be used as a function on regular arrays:
In [118]:data=np.random.randint(0,7,size=50)In [119]:dataOut[119]:array([6, 6, 2, 3, 5, 3, 2, 5, 4, 5, 4, 3, 4, 5, 0, 2, 0, 4, 2, 0, 3, 2, 2, 5, 6, 5, 3, 4, 6, 4, 3, 5, 6, 4, 3, 6, 2, 6, 6, 2, 3, 4, 2, 1, 6, 2, 6, 1, 5, 4])In [120]:s=pd.Series(data)In [121]:s.value_counts()Out[121]:6 102 104 93 85 80 31 2Name: count, dtype: int64
The value_counts() method can be used to count combinations across multiple columns. By default all columns are used, but a subset can be selected using the subset argument, as sketched after the example below.
In [122]:data={"a":[1,2,3,4],"b":["x","x","y","y"]}In [123]:frame=pd.DataFrame(data)In [124]:frame.value_counts()Out[124]:a b1 x 12 x 13 y 14 y 1Name: count, dtype: int64
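A minimal sketch of the subset argument, reusing the frame above; both "x" and "y" appear twice in column "b", so each combination counts 2:

frame.value_counts(subset=["b"])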
Similarly, you can get the most frequently occurring value(s), i.e. the mode, of the values in a Series or DataFrame:
In [125]:s5=pd.Series([1,1,3,3,3,5,5,7,7,7])In [126]:s5.mode()Out[126]:0 31 7dtype: int64In [127]:df5=pd.DataFrame( .....:{ .....:"A":np.random.randint(0,7,size=50), .....:"B":np.random.randint(-10,15,size=50), .....:} .....:) .....:In [128]:df5.mode()Out[128]: A B0 1.0 -91 NaN 102 NaN 13
Discretization and quantiling#
Continuous values can be discretized using the cut() (bins based on values) and qcut() (bins based on sample quantiles) functions:
In [129]:arr=np.random.randn(20)In [130]:factor=pd.cut(arr,4)In [131]:factorOut[131]:[(-0.251, 0.464], (-0.968, -0.251], (0.464, 1.179], (-0.251, 0.464], (-0.968, -0.251], ..., (-0.251, 0.464], (-0.968, -0.251], (-0.968, -0.251], (-0.968, -0.251], (-0.968, -0.251]]Length: 20Categories (4, interval[float64, right]): [(-0.968, -0.251] < (-0.251, 0.464] < (0.464, 1.179] < (1.179, 1.893]]In [132]:factor=pd.cut(arr,[-5,-1,0,1,5])In [133]:factorOut[133]:[(0, 1], (-1, 0], (0, 1], (0, 1], (-1, 0], ..., (-1, 0], (-1, 0], (-1, 0], (-1, 0], (-1, 0]]Length: 20Categories (4, interval[int64, right]): [(-5, -1] < (-1, 0] < (0, 1] < (1, 5]]
qcut() computes sample quantiles. For example, we could slice up some normally distributed data into equal-size quartiles like so:
In [134]:arr=np.random.randn(30)In [135]:factor=pd.qcut(arr,[0,0.25,0.5,0.75,1])In [136]:factorOut[136]:[(0.569, 1.184], (-2.278, -0.301], (-2.278, -0.301], (0.569, 1.184], (0.569, 1.184], ..., (-0.301, 0.569], (1.184, 2.346], (1.184, 2.346], (-0.301, 0.569], (-2.278, -0.301]]Length: 30Categories (4, interval[float64, right]): [(-2.278, -0.301] < (-0.301, 0.569] < (0.569, 1.184] < (1.184, 2.346]]
We can also pass infinite values to define the bins:
In [137]:arr=np.random.randn(20)In [138]:factor=pd.cut(arr,[-np.inf,0,np.inf])In [139]:factorOut[139]:[(-inf, 0.0], (0.0, inf], (0.0, inf], (-inf, 0.0], (-inf, 0.0], ..., (-inf, 0.0], (-inf, 0.0], (-inf, 0.0], (0.0, inf], (0.0, inf]]Length: 20Categories (2, interval[float64, right]): [(-inf, 0.0] < (0.0, inf]]
Function application#
To apply your own or another library's functions to pandas objects, you should be aware of the three methods below. The appropriate method to use depends on whether your function expects to operate on an entire DataFrame or Series, row- or column-wise, or elementwise.
Tablewise function application#
DataFrames and Series can be passed into functions. However, if the function needs to be called in a chain, consider using the pipe() method.
First some setup:
In [140]:defextract_city_name(df): .....:""" .....: Chicago, IL -> Chicago for city_name column .....: """ .....:df["city_name"]=df["city_and_code"].str.split(",").str.get(0) .....:returndf .....:In [141]:defadd_country_name(df,country_name=None): .....:""" .....: Chicago -> Chicago-US for city_name column .....: """ .....:col="city_name" .....:df["city_and_country"]=df[col]+country_name .....:returndf .....:In [142]:df_p=pd.DataFrame({"city_and_code":["Chicago, IL"]})
extract_city_name and add_country_name are functions taking and returning DataFrames.
Now compare the following:
In [143]:add_country_name(extract_city_name(df_p),country_name="US")Out[143]: city_and_code city_name city_and_country0 Chicago, IL Chicago ChicagoUS
Is equivalent to:
In [144]:df_p.pipe(extract_city_name).pipe(add_country_name,country_name="US")Out[144]: city_and_code city_name city_and_country0 Chicago, IL Chicago ChicagoUS
pandas encourages the second style, which is known as method chaining. pipe makes it easy to use your own or another library's functions in method chains, alongside pandas' methods.
In the example above, the functions extract_city_name and add_country_name each expected a DataFrame as the first positional argument. What if the function you wish to apply takes its data as, say, the second argument? In this case, provide pipe with a tuple of (callable, data_keyword). .pipe will route the DataFrame to the argument specified in the tuple.
For example, we can fit a regression using statsmodels. Their API expects a formula first and a DataFrame as the second argument, data. We pass in the function, keyword pair (sm.ols, 'data') to pipe:
In [147]:importstatsmodels.formula.apiassmIn [148]:bb=pd.read_csv("data/baseball.csv",index_col="id")In [149]:( .....:bb.query("h > 0") .....:.assign(ln_h=lambdadf:np.log(df.h)) .....:.pipe((sm.ols,"data"),"hr ~ ln_h + year + g + C(lg)") .....:.fit() .....:.summary() .....:) .....:Out[149]:<class 'statsmodels.iolib.summary.Summary'>""" OLS Regression Results==============================================================================Dep. Variable: hr R-squared: 0.685Model: OLS Adj. R-squared: 0.665Method: Least Squares F-statistic: 34.28Date: Tue, 22 Nov 2022 Prob (F-statistic): 3.48e-15Time: 05:34:17 Log-Likelihood: -205.92No. Observations: 68 AIC: 421.8Df Residuals: 63 BIC: 432.9Df Model: 4Covariance Type: nonrobust=============================================================================== coef std err t P>|t| [0.025 0.975]-------------------------------------------------------------------------------Intercept-8484.77204664.146-1.8190.074-1.78e+04835.780C(lg)[T.NL]-2.27361.325-1.7160.091-4.9220.375ln_h-1.35420.875-1.5470.127-3.1030.395year4.22772.3241.8190.074-0.4178.872g0.18410.0296.2580.0000.1250.243==============================================================================Omnibus: 10.875 Durbin-Watson: 1.999Prob(Omnibus):0.004Jarque-Bera(JB):17.298Skew: 0.537 Prob(JB): 0.000175Kurtosis: 5.225 Cond. No. 1.49e+07==============================================================================Notes:[1]StandardErrorsassumethatthecovariancematrixoftheerrorsiscorrectlyspecified.[2]Theconditionnumberislarge,1.49e+07.Thismightindicatethattherearestrongmulticollinearityorothernumericalproblems."""
The pipe method is inspired by unix pipes and, more recently, dplyr and magrittr, which have introduced the popular (%>%) (read pipe) operator for R. The implementation of pipe here is quite clean and feels right at home in Python. We encourage you to view the source code of pipe().
Row or column-wise function application#
Arbitrary functions can be applied along the axes of a DataFrame using the apply() method, which, like the descriptive statistics methods, takes an optional axis argument:
In [145]:df.apply(lambdax:np.mean(x))Out[145]:one 0.811094two 1.360588three 0.187958dtype: float64In [146]:df.apply(lambdax:np.mean(x),axis=1)Out[146]:a 1.583749b 0.734929c 1.133683d -0.166914dtype: float64In [147]:df.apply(lambdax:x.max()-x.min())Out[147]:one 1.051928two 1.632779three 1.840607dtype: float64In [148]:df.apply(np.cumsum)Out[148]: one two threea 1.394981 1.772517 NaNb 1.738035 3.684640 -0.050390c 2.433281 5.163008 1.177045d NaN 5.442353 0.563873In [149]:df.apply(np.exp)Out[149]: one two threea 4.034899 5.885648 NaNb 1.409244 6.767440 0.950858c 2.004201 4.385785 3.412466d NaN 1.322262 0.541630
The apply() method will also dispatch on a string method name.
In [150]:df.apply("mean")Out[150]:one 0.811094two 1.360588three 0.187958dtype: float64In [151]:df.apply("mean",axis=1)Out[151]:a 1.583749b 0.734929c 1.133683d -0.166914dtype: float64
The return type of the function passed to apply() affects the type of the final output from DataFrame.apply for the default behaviour:
- If the applied function returns a Series, the final output is a DataFrame. The columns match the index of the Series returned by the applied function.
- If the applied function returns any other type, the final output is a Series.
This default behaviour can be overridden using the result_type argument, which accepts three options: reduce, broadcast, and expand. These will determine how list-like return values expand (or not) to a DataFrame.
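A hedged sketch of these options on a small hypothetical frame (not one defined above); each call returns a list per row, and result_type controls how that list is laid out:

df_rt = pd.DataFrame(np.random.randn(3, 2), columns=["x", "y"])  # hypothetical frame

df_rt.apply(lambda row: [1, 2], axis=1)                           # default: a Series of lists
df_rt.apply(lambda row: [1, 2], axis=1, result_type="expand")     # list expands into columns 0 and 1
df_rt.apply(lambda row: [1, 2], axis=1, result_type="broadcast")  # result broadcast back to columns x and y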
apply() combined with some cleverness can be used to answer many questions about a data set. For example, suppose we wanted to extract the date where the maximum value for each column occurred:
In [152]:tsdf=pd.DataFrame( .....:np.random.randn(1000,3), .....:columns=["A","B","C"], .....:index=pd.date_range("1/1/2000",periods=1000), .....:) .....:In [153]:tsdf.apply(lambdax:x.idxmax())Out[153]:A 2000-08-06B 2001-01-18C 2001-07-18dtype: datetime64[ns]
You may also pass additional arguments and keyword arguments to the apply() method.
In [154]:defsubtract_and_divide(x,sub,divide=1): .....:return(x-sub)/divide .....:In [155]:df_udf=pd.DataFrame(np.ones((2,2)))In [156]:df_udf.apply(subtract_and_divide,args=(5,),divide=3)Out[156]: 0 10 -1.333333 -1.3333331 -1.333333 -1.333333
Another useful feature is the ability to pass Series methods to carry out some Series operation on each column or row:
In [157]:tsdf=pd.DataFrame( .....:np.random.randn(10,3), .....:columns=["A","B","C"], .....:index=pd.date_range("1/1/2000",periods=10), .....:) .....:In [158]:tsdf.iloc[3:7]=np.nanIn [159]:tsdfOut[159]: A B C2000-01-01 -0.158131 -0.232466 0.3216042000-01-02 -1.810340 -3.105758 0.4338342000-01-03 -1.209847 -1.156793 -0.1367942000-01-04 NaN NaN NaN2000-01-05 NaN NaN NaN2000-01-06 NaN NaN NaN2000-01-07 NaN NaN NaN2000-01-08 -0.653602 0.178875 1.0082982000-01-09 1.007996 0.462824 0.2544722000-01-10 0.307473 0.600337 1.643950In [160]:tsdf.apply(pd.Series.interpolate)Out[160]: A B C2000-01-01 -0.158131 -0.232466 0.3216042000-01-02 -1.810340 -3.105758 0.4338342000-01-03 -1.209847 -1.156793 -0.1367942000-01-04 -1.098598 -0.889659 0.0922252000-01-05 -0.987349 -0.622526 0.3212432000-01-06 -0.876100 -0.355392 0.5502622000-01-07 -0.764851 -0.088259 0.7792802000-01-08 -0.653602 0.178875 1.0082982000-01-09 1.007996 0.462824 0.2544722000-01-10 0.307473 0.600337 1.643950
Finally, apply() takes an argument raw which is False by default; this converts each row or column into a Series before applying the function. When set to True, the passed function will instead receive an ndarray object, which has positive performance implications if you do not need the indexing functionality.
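A minimal sketch of the raw flag using the df from above; with raw=True each column is passed to the function as a plain ndarray rather than a Series, so only positional access is available:

df.apply(np.sum, raw=True)              # same result as df.apply(np.sum), often faster
df.apply(lambda arr: arr[0], raw=True)  # first value of each column, by position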
Aggregation API#
The aggregation API allows one to express possibly multiple aggregation operations in a single concise way. This API is similar across pandas objects; see the groupby API, the window API, and the resample API. The entry point for aggregation is DataFrame.aggregate(), or the alias DataFrame.agg().
We will use a similar starting frame from above:
In [161]:tsdf=pd.DataFrame( .....:np.random.randn(10,3), .....:columns=["A","B","C"], .....:index=pd.date_range("1/1/2000",periods=10), .....:) .....:In [162]:tsdf.iloc[3:7]=np.nanIn [163]:tsdfOut[163]: A B C2000-01-01 1.257606 1.004194 0.1675742000-01-02 -0.749892 0.288112 -0.7573042000-01-03 -0.207550 -0.298599 0.1160182000-01-04 NaN NaN NaN2000-01-05 NaN NaN NaN2000-01-06 NaN NaN NaN2000-01-07 NaN NaN NaN2000-01-08 0.814347 -0.257623 0.8692262000-01-09 -0.250663 -1.206601 0.8968392000-01-10 2.169758 -1.333363 0.283157
Using a single function is equivalent to apply(). You can also pass named methods as strings. These will return a Series of the aggregated output:
In [164]:tsdf.agg(lambdax:np.sum(x))Out[164]:A 3.033606B -1.803879C 1.575510dtype: float64In [165]:tsdf.agg("sum")Out[165]:A 3.033606B -1.803879C 1.575510dtype: float64# these are equivalent to a ``.sum()`` because we are aggregating# on a single functionIn [166]:tsdf.sum()Out[166]:A 3.033606B -1.803879C 1.575510dtype: float64
A single aggregation on a Series will return a scalar value:

In [167]: tsdf["A"].agg("sum")
Out[167]: 3.033606102414146
Aggregating with multiple functions#
You can pass multiple aggregation arguments as a list. The results of each of the passed functions will be a row in the resulting DataFrame. These are naturally named from the aggregation function.
In [168]:tsdf.agg(["sum"])Out[168]: A B Csum 3.033606 -1.803879 1.57551
Multiple functions yield multiple rows:
In [169]:tsdf.agg(["sum","mean"])Out[169]: A B Csum 3.033606 -1.803879 1.575510mean 0.505601 -0.300647 0.262585
On a Series, multiple functions return a Series, indexed by the function names:
In [170]:tsdf["A"].agg(["sum","mean"])Out[170]:sum 3.033606mean 0.505601Name: A, dtype: float64
Passing a lambda function will yield a <lambda> named row:
In [171]:tsdf["A"].agg(["sum",lambdax:x.mean()])Out[171]:sum 3.033606<lambda> 0.505601Name: A, dtype: float64
Passing a named function will yield that name for the row:
In [172]:defmymean(x): .....:returnx.mean() .....:In [173]:tsdf["A"].agg(["sum",mymean])Out[173]:sum 3.033606mymean 0.505601Name: A, dtype: float64
Aggregating with a dict#
Passing a dictionary of column names to a scalar or a list of scalars to DataFrame.agg allows you to customize which functions are applied to which columns. Note that the results are not in any particular order; you can use an OrderedDict instead to guarantee ordering.
In [174]:tsdf.agg({"A":"mean","B":"sum"})Out[174]:A 0.505601B -1.803879dtype: float64
Passing a list-like will generate a DataFrame output. You will get a matrix-like output of all of the aggregators. The output will consist of all unique functions. Those that are not noted for a particular column will be NaN:
In [175]:tsdf.agg({"A":["mean","min"],"B":"sum"})Out[175]: A Bmean 0.505601 NaNmin -0.749892 NaNsum NaN -1.803879
Custom describe#
With .agg() it is possible to easily create a custom describe function, similar to the built-in describe function.
In [176]:fromfunctoolsimportpartialIn [177]:q_25=partial(pd.Series.quantile,q=0.25)In [178]:q_25.__name__="25%"In [179]:q_75=partial(pd.Series.quantile,q=0.75)In [180]:q_75.__name__="75%"In [181]:tsdf.agg(["count","mean","std","min",q_25,"median",q_75,"max"])Out[181]: A B Ccount 6.000000 6.000000 6.000000mean 0.505601 -0.300647 0.262585std 1.103362 0.887508 0.606860min -0.749892 -1.333363 -0.75730425% -0.239885 -0.979600 0.128907median 0.303398 -0.278111 0.22536575% 1.146791 0.151678 0.722709max 2.169758 1.004194 0.896839
Transform API#
The transform() method returns an object that is indexed the same (same size) as the original. This API allows you to provide multiple operations at the same time rather than one-by-one. Its API is quite similar to the .agg API.
We create a frame similar to the one used in the above sections.
In [182]:tsdf=pd.DataFrame( .....:np.random.randn(10,3), .....:columns=["A","B","C"], .....:index=pd.date_range("1/1/2000",periods=10), .....:) .....:In [183]:tsdf.iloc[3:7]=np.nanIn [184]:tsdfOut[184]: A B C2000-01-01 -0.428759 -0.864890 -0.6753412000-01-02 -0.168731 1.338144 -1.2793212000-01-03 -1.621034 0.438107 0.9037942000-01-04 NaN NaN NaN2000-01-05 NaN NaN NaN2000-01-06 NaN NaN NaN2000-01-07 NaN NaN NaN2000-01-08 0.254374 -1.240447 -0.2010522000-01-09 -0.157795 0.791197 -1.1442092000-01-10 -0.030876 0.371900 0.061932
Transform the entire frame. .transform() allows input functions as: a NumPy function, a string function name, or a user defined function.
In [185]:tsdf.transform(np.abs)Out[185]: A B C2000-01-01 0.428759 0.864890 0.6753412000-01-02 0.168731 1.338144 1.2793212000-01-03 1.621034 0.438107 0.9037942000-01-04 NaN NaN NaN2000-01-05 NaN NaN NaN2000-01-06 NaN NaN NaN2000-01-07 NaN NaN NaN2000-01-08 0.254374 1.240447 0.2010522000-01-09 0.157795 0.791197 1.1442092000-01-10 0.030876 0.371900 0.061932In [186]:tsdf.transform("abs")Out[186]: A B C2000-01-01 0.428759 0.864890 0.6753412000-01-02 0.168731 1.338144 1.2793212000-01-03 1.621034 0.438107 0.9037942000-01-04 NaN NaN NaN2000-01-05 NaN NaN NaN2000-01-06 NaN NaN NaN2000-01-07 NaN NaN NaN2000-01-08 0.254374 1.240447 0.2010522000-01-09 0.157795 0.791197 1.1442092000-01-10 0.030876 0.371900 0.061932In [187]:tsdf.transform(lambdax:x.abs())Out[187]: A B C2000-01-01 0.428759 0.864890 0.6753412000-01-02 0.168731 1.338144 1.2793212000-01-03 1.621034 0.438107 0.9037942000-01-04 NaN NaN NaN2000-01-05 NaN NaN NaN2000-01-06 NaN NaN NaN2000-01-07 NaN NaN NaN2000-01-08 0.254374 1.240447 0.2010522000-01-09 0.157795 0.791197 1.1442092000-01-10 0.030876 0.371900 0.061932
Here transform() received a single function; this is equivalent to a ufunc application.
In [188]:np.abs(tsdf)Out[188]: A B C2000-01-01 0.428759 0.864890 0.6753412000-01-02 0.168731 1.338144 1.2793212000-01-03 1.621034 0.438107 0.9037942000-01-04 NaN NaN NaN2000-01-05 NaN NaN NaN2000-01-06 NaN NaN NaN2000-01-07 NaN NaN NaN2000-01-08 0.254374 1.240447 0.2010522000-01-09 0.157795 0.791197 1.1442092000-01-10 0.030876 0.371900 0.061932
Passing a single function to .transform() with a Series will yield a single Series in return.
In [189]:tsdf["A"].transform(np.abs)Out[189]:2000-01-01 0.4287592000-01-02 0.1687312000-01-03 1.6210342000-01-04 NaN2000-01-05 NaN2000-01-06 NaN2000-01-07 NaN2000-01-08 0.2543742000-01-09 0.1577952000-01-10 0.030876Freq: D, Name: A, dtype: float64
Transform with multiple functions#
Passing multiple functions will yield a column MultiIndexed DataFrame. The first level will be the original frame column names; the second level will be the names of the transforming functions.
In [190]:tsdf.transform([np.abs,lambdax:x+1])Out[190]: A B C absolute <lambda> absolute <lambda> absolute <lambda>2000-01-01 0.428759 0.571241 0.864890 0.135110 0.675341 0.3246592000-01-02 0.168731 0.831269 1.338144 2.338144 1.279321 -0.2793212000-01-03 1.621034 -0.621034 0.438107 1.438107 0.903794 1.9037942000-01-04 NaN NaN NaN NaN NaN NaN2000-01-05 NaN NaN NaN NaN NaN NaN2000-01-06 NaN NaN NaN NaN NaN NaN2000-01-07 NaN NaN NaN NaN NaN NaN2000-01-08 0.254374 1.254374 1.240447 -0.240447 0.201052 0.7989482000-01-09 0.157795 0.842205 0.791197 1.791197 1.144209 -0.1442092000-01-10 0.030876 0.969124 0.371900 1.371900 0.061932 1.061932
Passing multiple functions to a Series will yield a DataFrame. The resulting column names will be the transforming functions.
In [191]:tsdf["A"].transform([np.abs,lambdax:x+1])Out[191]: absolute <lambda>2000-01-01 0.428759 0.5712412000-01-02 0.168731 0.8312692000-01-03 1.621034 -0.6210342000-01-04 NaN NaN2000-01-05 NaN NaN2000-01-06 NaN NaN2000-01-07 NaN NaN2000-01-08 0.254374 1.2543742000-01-09 0.157795 0.8422052000-01-10 0.030876 0.969124
Transforming with a dict#
Passing a dict of functions will allow selective transforming per column.
In [192]:tsdf.transform({"A":np.abs,"B":lambdax:x+1})Out[192]: A B2000-01-01 0.428759 0.1351102000-01-02 0.168731 2.3381442000-01-03 1.621034 1.4381072000-01-04 NaN NaN2000-01-05 NaN NaN2000-01-06 NaN NaN2000-01-07 NaN NaN2000-01-08 0.254374 -0.2404472000-01-09 0.157795 1.7911972000-01-10 0.030876 1.371900
Passing a dict of lists will generate a MultiIndexed DataFrame with theseselective transforms.
In [193]:tsdf.transform({"A":np.abs,"B":[lambdax:x+1,"sqrt"]})Out[193]: A B absolute <lambda> sqrt2000-01-01 0.428759 0.135110 NaN2000-01-02 0.168731 2.338144 1.1567822000-01-03 1.621034 1.438107 0.6618972000-01-04 NaN NaN NaN2000-01-05 NaN NaN NaN2000-01-06 NaN NaN NaN2000-01-07 NaN NaN NaN2000-01-08 0.254374 -0.240447 NaN2000-01-09 0.157795 1.791197 0.8894932000-01-10 0.030876 1.371900 0.609836
Applying elementwise functions#
Since not all functions can be vectorized (accept NumPy arrays and return another array or value), the method map() on DataFrame and analogously map() on Series accept any Python function taking a single value and returning a single value. For example:
In [194]:df4=df.copy()In [195]:df4Out[195]: one two threea 1.394981 1.772517 NaNb 0.343054 1.912123 -0.050390c 0.695246 1.478369 1.227435d NaN 0.279344 -0.613172In [196]:deff(x): .....:returnlen(str(x)) .....:In [197]:df4["one"].map(f)Out[197]:a 18b 19c 18d 3Name: one, dtype: int64In [198]:df4.map(f)Out[198]: one two threea 18 17 3b 19 18 20c 18 18 16d 3 19 19
Series.map() has an additional feature; it can be used to easily “link” or “map” values defined by a secondary series. This is closely related to merging/joining functionality:
In [199]:s=pd.Series( .....:["six","seven","six","seven","six"],index=["a","b","c","d","e"] .....:) .....:In [200]:t=pd.Series({"six":6.0,"seven":7.0})In [201]:sOut[201]:a sixb sevenc sixd sevene sixdtype: objectIn [202]:s.map(t)Out[202]:a 6.0b 7.0c 6.0d 7.0e 6.0dtype: float64
Reindexing and altering labels#
reindex() is the fundamental data alignment method in pandas. It is used to implement nearly all other features relying on label-alignment functionality. To reindex means to conform the data to match a given set of labels along a particular axis. This accomplishes several things:
Reorders the existing data to match a new set of labels
Inserts missing value (NA) markers in label locations where no data for that label existed
If specified, fill data for missing labels using logic (highly relevant to working with time series data)
Here is a simple example:
In [203]:s=pd.Series(np.random.randn(5),index=["a","b","c","d","e"])In [204]:sOut[204]:a 1.695148b 1.328614c 1.234686d -0.385845e -1.326508dtype: float64In [205]:s.reindex(["e","b","f","d"])Out[205]:e -1.326508b 1.328614f NaNd -0.385845dtype: float64
Here, the f label was not contained in the Series and hence appears as NaN in the result.
With a DataFrame, you can simultaneously reindex the index and columns:
In [206]:dfOut[206]: one two threea 1.394981 1.772517 NaNb 0.343054 1.912123 -0.050390c 0.695246 1.478369 1.227435d NaN 0.279344 -0.613172In [207]:df.reindex(index=["c","f","b"],columns=["three","two","one"])Out[207]: three two onec 1.227435 1.478369 0.695246f NaN NaN NaNb -0.050390 1.912123 0.343054
Note that the Index objects containing the actual axis labels can be shared between objects. So if we have a Series and a DataFrame, the following can be done:
In [208]:rs=s.reindex(df.index)In [209]:rsOut[209]:a 1.695148b 1.328614c 1.234686d -0.385845dtype: float64In [210]:rs.indexisdf.indexOut[210]:True
This means that the reindexed Series’s index is the same Python object as theDataFrame’s index.
DataFrame.reindex() also supports an “axis-style” calling convention, where you specify a single labels argument and the axis it applies to.
In [211]:df.reindex(["c","f","b"],axis="index")Out[211]: one two threec 0.695246 1.478369 1.227435f NaN NaN NaNb 0.343054 1.912123 -0.050390In [212]:df.reindex(["three","two","one"],axis="columns")Out[212]: three two onea NaN 1.772517 1.394981b -0.050390 1.912123 0.343054c 1.227435 1.478369 0.695246d -0.613172 0.279344 NaN
See also
MultiIndex / Advanced Indexing is an even more concise way of doing reindexing.
Note
When writing performance-sensitive code, there is a good reason to spend some time becoming a reindexing ninja: many operations are faster on pre-aligned data. Adding two unaligned DataFrames internally triggers a reindexing step. For exploratory analysis you will hardly notice the difference (because reindex has been heavily optimized), but when CPU cycles matter, sprinkling a few explicit reindex calls here and there can have an impact.
Reindexing to align with another object#
You may wish to take an object and reindex its axes to be labeled the same as another object. While the syntax for this is straightforward albeit verbose, it is a common enough operation that the reindex_like() method is available to make this simpler:
In [213]:df2=df.reindex(["a","b","c"],columns=["one","two"])In [214]:df3=df2-df2.mean()In [215]:df2Out[215]: one twoa 1.394981 1.772517b 0.343054 1.912123c 0.695246 1.478369In [216]:df3Out[216]: one twoa 0.583888 0.051514b -0.468040 0.191120c -0.115848 -0.242634In [217]:df.reindex_like(df2)Out[217]: one twoa 1.394981 1.772517b 0.343054 1.912123c 0.695246 1.478369
Aligning objects with each other with align#
The align() method is the fastest way to simultaneously align two objects. It supports a join argument (related to joining and merging):
- join='outer': take the union of the indexes (default)
- join='left': use the calling object's index
- join='right': use the passed object's index
- join='inner': intersect the indexes
It returns a tuple with both of the reindexed Series:
In [218]:s=pd.Series(np.random.randn(5),index=["a","b","c","d","e"])In [219]:s1=s[:4]In [220]:s2=s[1:]In [221]:s1.align(s2)Out[221]:(a -0.186646 b -1.692424 c -0.303893 d -1.425662 e NaN dtype: float64, a NaN b -1.692424 c -0.303893 d -1.425662 e 1.114285 dtype: float64)In [222]:s1.align(s2,join="inner")Out[222]:(b -1.692424 c -0.303893 d -1.425662 dtype: float64, b -1.692424 c -0.303893 d -1.425662 dtype: float64)In [223]:s1.align(s2,join="left")Out[223]:(a -0.186646 b -1.692424 c -0.303893 d -1.425662 dtype: float64, a NaN b -1.692424 c -0.303893 d -1.425662 dtype: float64)
For DataFrames, the join method will be applied to both the index and the columns by default:
In [224]:df.align(df2,join="inner")Out[224]:( one two a 1.394981 1.772517 b 0.343054 1.912123 c 0.695246 1.478369, one two a 1.394981 1.772517 b 0.343054 1.912123 c 0.695246 1.478369)
You can also pass an axis option to only align on the specified axis:
In [225]:df.align(df2,join="inner",axis=0)Out[225]:( one two three a 1.394981 1.772517 NaN b 0.343054 1.912123 -0.050390 c 0.695246 1.478369 1.227435, one two a 1.394981 1.772517 b 0.343054 1.912123 c 0.695246 1.478369)
If you pass a Series to DataFrame.align(), you can choose to align both objects either on the DataFrame's index or columns using the axis argument:
In [226]:df.align(df2.iloc[0],axis=1)Out[226]:( one three two a 1.394981 NaN 1.772517 b 0.343054 -0.050390 1.912123 c 0.695246 1.227435 1.478369 d NaN -0.613172 0.279344, one 1.394981 three NaN two 1.772517 Name: a, dtype: float64)
Filling while reindexing#
reindex() takes an optional parameter method which is a filling method chosen from the following table:
| Method           | Action                            |
|------------------|-----------------------------------|
| pad / ffill      | Fill values forward               |
| bfill / backfill | Fill values backward              |
| nearest          | Fill from the nearest index value |
We illustrate these fill methods on a simple Series:
In [227]:rng=pd.date_range("1/3/2000",periods=8)In [228]:ts=pd.Series(np.random.randn(8),index=rng)In [229]:ts2=ts.iloc[[0,3,6]]In [230]:tsOut[230]:2000-01-03 0.1830512000-01-04 0.4005282000-01-05 -0.0150832000-01-06 2.3954892000-01-07 1.4148062000-01-08 0.1184282000-01-09 0.7336392000-01-10 -0.936077Freq: D, dtype: float64In [231]:ts2Out[231]:2000-01-03 0.1830512000-01-06 2.3954892000-01-09 0.733639Freq: 3D, dtype: float64In [232]:ts2.reindex(ts.index)Out[232]:2000-01-03 0.1830512000-01-04 NaN2000-01-05 NaN2000-01-06 2.3954892000-01-07 NaN2000-01-08 NaN2000-01-09 0.7336392000-01-10 NaNFreq: D, dtype: float64In [233]:ts2.reindex(ts.index,method="ffill")Out[233]:2000-01-03 0.1830512000-01-04 0.1830512000-01-05 0.1830512000-01-06 2.3954892000-01-07 2.3954892000-01-08 2.3954892000-01-09 0.7336392000-01-10 0.733639Freq: D, dtype: float64In [234]:ts2.reindex(ts.index,method="bfill")Out[234]:2000-01-03 0.1830512000-01-04 2.3954892000-01-05 2.3954892000-01-06 2.3954892000-01-07 0.7336392000-01-08 0.7336392000-01-09 0.7336392000-01-10 NaNFreq: D, dtype: float64In [235]:ts2.reindex(ts.index,method="nearest")Out[235]:2000-01-03 0.1830512000-01-04 0.1830512000-01-05 2.3954892000-01-06 2.3954892000-01-07 2.3954892000-01-08 0.7336392000-01-09 0.7336392000-01-10 0.733639Freq: D, dtype: float64
These methods require that the indexes are ordered increasing or decreasing.
Note that the same result could have been achieved using ffill (except for method='nearest') or interpolate:
In [236]:ts2.reindex(ts.index).ffill()Out[236]:2000-01-03 0.1830512000-01-04 0.1830512000-01-05 0.1830512000-01-06 2.3954892000-01-07 2.3954892000-01-08 2.3954892000-01-09 0.7336392000-01-10 0.733639Freq: D, dtype: float64
reindex() will raise a ValueError if the index is not monotonically increasing or decreasing. fillna() and interpolate() will not perform any checks on the order of the index.
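A minimal sketch of that ValueError, using a hypothetical series whose index is not monotonic:

s_unordered = pd.Series([1.0, 2.0, 3.0], index=["b", "a", "c"])
try:
    s_unordered.reindex(["a", "b", "c", "d"], method="ffill")
except ValueError as err:
    print(err)  # reindexing with a fill method requires a monotonic index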
Limits on filling while reindexing#
The limit and tolerance arguments provide additional control over filling while reindexing. Limit specifies the maximum count of consecutive matches:
In [237]:ts2.reindex(ts.index,method="ffill",limit=1)Out[237]:2000-01-03 0.1830512000-01-04 0.1830512000-01-05 NaN2000-01-06 2.3954892000-01-07 2.3954892000-01-08 NaN2000-01-09 0.7336392000-01-10 0.733639Freq: D, dtype: float64
In contrast, tolerance specifies the maximum distance between the index and indexer values:
In [238]:ts2.reindex(ts.index,method="ffill",tolerance="1 day")Out[238]:2000-01-03 0.1830512000-01-04 0.1830512000-01-05 NaN2000-01-06 2.3954892000-01-07 2.3954892000-01-08 NaN2000-01-09 0.7336392000-01-10 0.733639Freq: D, dtype: float64
Notice that when used on a DatetimeIndex, TimedeltaIndex or PeriodIndex, tolerance will be coerced into a Timedelta if possible. This allows you to specify tolerance with appropriate strings.
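For instance, the string tolerance above could equally be passed as a Timedelta object (a sketch reusing ts2 and ts from above):

ts2.reindex(ts.index, method="ffill", tolerance=pd.Timedelta("1 day"))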
Dropping labels from an axis#
A method closely related to reindex is the drop() function. It removes a set of labels from an axis:
In [239]:dfOut[239]: one two threea 1.394981 1.772517 NaNb 0.343054 1.912123 -0.050390c 0.695246 1.478369 1.227435d NaN 0.279344 -0.613172In [240]:df.drop(["a","d"],axis=0)Out[240]: one two threeb 0.343054 1.912123 -0.050390c 0.695246 1.478369 1.227435In [241]:df.drop(["one"],axis=1)Out[241]: two threea 1.772517 NaNb 1.912123 -0.050390c 1.478369 1.227435d 0.279344 -0.613172
Note that the following also works, but is a bit less obvious / clean:
In [242]:df.reindex(df.index.difference(["a","d"]))Out[242]: one two threeb 0.343054 1.912123 -0.050390c 0.695246 1.478369 1.227435
Renaming / mapping labels#
The rename() method allows you to relabel an axis based on some mapping (a dict or Series) or an arbitrary function.
In [243]:sOut[243]:a -0.186646b -1.692424c -0.303893d -1.425662e 1.114285dtype: float64In [244]:s.rename(str.upper)Out[244]:A -0.186646B -1.692424C -0.303893D -1.425662E 1.114285dtype: float64
If you pass a function, it must return a value when called with any of the labels (and must produce a set of unique values). A dict or Series can also be used:
In [245]:df.rename( .....:columns={"one":"foo","two":"bar"}, .....:index={"a":"apple","b":"banana","d":"durian"}, .....:) .....:Out[245]: foo bar threeapple 1.394981 1.772517 NaNbanana 0.343054 1.912123 -0.050390c 0.695246 1.478369 1.227435durian NaN 0.279344 -0.613172
If the mapping doesn't include a column/index label, it isn't renamed. Note that extra labels in the mapping don't throw an error, as sketched below.
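A minimal sketch of that behavior on the df above; "missing" is not a column, so it is simply ignored, and "three" is left untouched:

df.rename(columns={"one": "foo", "missing": "bar"})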
DataFrame.rename() also supports an “axis-style” calling convention, where you specify a single mapper and the axis to apply that mapping to.
In [246]:df.rename({"one":"foo","two":"bar"},axis="columns")Out[246]: foo bar threea 1.394981 1.772517 NaNb 0.343054 1.912123 -0.050390c 0.695246 1.478369 1.227435d NaN 0.279344 -0.613172In [247]:df.rename({"a":"apple","b":"banana","d":"durian"},axis="index")Out[247]: one two threeapple 1.394981 1.772517 NaNbanana 0.343054 1.912123 -0.050390c 0.695246 1.478369 1.227435durian NaN 0.279344 -0.613172
Finally, rename() also accepts a scalar or list-like for altering the Series.name attribute.
In [248]:s.rename("scalar-name")Out[248]:a -0.186646b -1.692424c -0.303893d -1.425662e 1.114285Name: scalar-name, dtype: float64
The methods DataFrame.rename_axis() and Series.rename_axis() allow specific names of a MultiIndex to be changed (as opposed to the labels).
In [249]:df=pd.DataFrame( .....:{"x":[1,2,3,4,5,6],"y":[10,20,30,40,50,60]}, .....:index=pd.MultiIndex.from_product( .....:[["a","b","c"],[1,2]],names=["let","num"] .....:), .....:) .....:In [250]:dfOut[250]: x ylet numa 1 1 10 2 2 20b 1 3 30 2 4 40c 1 5 50 2 6 60In [251]:df.rename_axis(index={"let":"abc"})Out[251]: x yabc numa 1 1 10 2 2 20b 1 3 30 2 4 40c 1 5 50 2 6 60In [252]:df.rename_axis(index=str.upper)Out[252]: x yLET NUMa 1 1 10 2 2 20b 1 3 30 2 4 40c 1 5 50 2 6 60
Iteration#
The behavior of basic iteration over pandas objects depends on the type. When iterating over a Series, it is regarded as array-like, and basic iteration produces the values. DataFrames follow the dict-like convention of iterating over the “keys” of the objects.
In short, basic iteration (for i in object) produces:
Series: values
DataFrame: column labels
Thus, for example, iterating over a DataFrame gives you the column names:
In [253]:df=pd.DataFrame( .....:{"col1":np.random.randn(3),"col2":np.random.randn(3)},index=["a","b","c"] .....:) .....:In [254]:forcolindf: .....:print(col) .....:col1col2
pandas objects also have the dict-like items() method to iterate over the (key, value) pairs.
To iterate over the rows of a DataFrame, you can use the following methods:
iterrows(): Iterate over the rows of a DataFrame as (index, Series) pairs. This converts the rows to Series objects, which can change the dtypes and has some performance implications.
itertuples(): Iterate over the rows of a DataFrame as namedtuples of the values. This is a lot faster than iterrows(), and is in most cases preferable to use to iterate over the values of a DataFrame.
Warning
Iterating through pandas objects is generally slow. In many cases, iterating manually over the rows is not needed and can be avoided with one of the following approaches:
Look for a vectorized solution: many operations can be performed using built-in methods or NumPy functions, (boolean) indexing, … (see the sketch after this list).
When you have a function that cannot work on the full DataFrame/Series at once, it is better to use apply() instead of iterating over the values. See the docs on function application.
If you need to do iterative manipulations on the values but performance is important, consider writing the inner loop with cython or numba. See the enhancing performance section for some examples of this approach.
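As a rough, illustrative sketch of the first point (the column names below are made up), a whole-column operation replaces an explicit row loop:
df_example = pd.DataFrame({"a": range(1000), "b": range(1000)})
# slow: iterate row by row and collect the results
totals = [row["a"] + row["b"] for _, row in df_example.iterrows()]
# fast: the vectorized equivalent operates on whole columns at once
df_example["total"] = df_example["a"] + df_example["b"]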
Warning
You should never modify something you are iterating over. This is not guaranteed to work in all cases. Depending on the data types, the iterator returns a copy and not a view, and writing to it will have no effect!
For example, in the following case setting the value has no effect:
In [255]:df=pd.DataFrame({"a":[1,2,3],"b":["a","b","c"]})In [256]:forindex,rowindf.iterrows(): .....:row["a"]=10 .....:In [257]:dfOut[257]: a b0 1 a1 2 b2 3 c
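If you do need to change values, a sketch of the usual alternative is to assign to the DataFrame itself rather than to the rows yielded by the iterator (a copy is used here so the original df is left alone):
df_mod = df.copy()
df_mod["a"] = 10                           # replaces the whole column
df_mod.loc[df_mod["b"] == "b", "a"] = 99   # or only the rows matching a condition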
items#
Consistent with the dict-like interface, items() iterates through key-value pairs:
Series: (index, scalar value) pairs
DataFrame: (column, Series) pairs
For example:
In [258]:forlabel,serindf.items(): .....:print(label) .....:print(ser) .....:a0 11 22 3Name: a, dtype: int64b0 a1 b2 cName: b, dtype: object
iterrows#
iterrows() allows you to iterate through the rows of a DataFrame as Series objects. It returns an iterator yielding each index value along with a Series containing the data in each row:
In [259]:forrow_index,rowindf.iterrows(): .....:print(row_index,row,sep="\n") .....:0a 1b aName: 0, dtype: object1a 2b bName: 1, dtype: object2a 3b cName: 2, dtype: object
Note
Because iterrows() returns a Series for each row, it does not preserve dtypes across the rows (dtypes are preserved across columns for DataFrames). For example,
In [260]:df_orig=pd.DataFrame([[1,1.5]],columns=["int","float"])In [261]:df_orig.dtypesOut[261]:int int64float float64dtype: objectIn [262]:row=next(df_orig.iterrows())[1]In [263]:rowOut[263]:int 1.0float 1.5Name: 0, dtype: float64
All values in row, returned as a Series, are now upcast to floats, including the original integer value in column int:
In [264]:row["int"].dtypeOut[264]:dtype('float64')In [265]:df_orig["int"].dtypeOut[265]:dtype('int64')
To preserve dtypes while iterating over the rows, it is better to use itertuples() which returns namedtuples of the values and which is generally much faster than iterrows().
For instance, a contrived way to transpose the DataFrame would be:
In [266]:df2=pd.DataFrame({"x":[1,2,3],"y":[4,5,6]})In [267]:print(df2) x y0 1 41 2 52 3 6In [268]:print(df2.T) 0 1 2x 1 2 3y 4 5 6In [269]:df2_t=pd.DataFrame({idx:valuesforidx,valuesindf2.iterrows()})In [270]:print(df2_t) 0 1 2x 1 2 3y 4 5 6
itertuples#
The itertuples() method will return an iterator yielding a namedtuple for each row in the DataFrame. The first element of the tuple will be the row’s corresponding index value, while the remaining values are the row values.
For instance:
In [271]:forrowindf.itertuples(): .....:print(row) .....:Pandas(Index=0, a=1, b='a')Pandas(Index=1, a=2, b='b')Pandas(Index=2, a=3, b='c')
This method does not convert the row to a Series object; it merely returns the values inside a namedtuple. Therefore, itertuples() preserves the data type of the values and is generally faster than iterrows().
Note
The column names will be renamed to positional names if they are invalid Python identifiers, repeated, or start with an underscore. With a large number of columns (>255), regular tuples are returned.
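A small, contrived sketch of the renaming behaviour (the column labels below are invented):
df_bad = pd.DataFrame({"a b": [1, 2], "1col": [3, 4]})
for row in df_bad.itertuples():
    print(row)   # e.g. Pandas(Index=0, _1=1, _2=3) -- invalid identifiers become positional names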
.dt accessor#
Series has an accessor to succinctly return datetime-like properties for the values of the Series, if it is a datetime/period-like Series. This will return a Series, indexed like the existing Series.
# datetimeIn [272]:s=pd.Series(pd.date_range("20130101 09:10:12",periods=4))In [273]:sOut[273]:0 2013-01-01 09:10:121 2013-01-02 09:10:122 2013-01-03 09:10:123 2013-01-04 09:10:12dtype: datetime64[ns]In [274]:s.dt.hourOut[274]:0 91 92 93 9dtype: int32In [275]:s.dt.secondOut[275]:0 121 122 123 12dtype: int32In [276]:s.dt.dayOut[276]:0 11 22 33 4dtype: int32
This enables nice expressions like this:
In [277]:s[s.dt.day==2]Out[277]:1 2013-01-02 09:10:12dtype: datetime64[ns]
You can easily produce tz-aware transformations:
In [278]:stz=s.dt.tz_localize("US/Eastern")In [279]:stzOut[279]:0 2013-01-01 09:10:12-05:001 2013-01-02 09:10:12-05:002 2013-01-03 09:10:12-05:003 2013-01-04 09:10:12-05:00dtype: datetime64[ns, US/Eastern]In [280]:stz.dt.tzOut[280]:<DstTzInfo 'US/Eastern' LMT-1 day, 19:04:00 STD>
You can also chain these types of operations:
In [281]:s.dt.tz_localize("UTC").dt.tz_convert("US/Eastern")Out[281]:0 2013-01-01 04:10:12-05:001 2013-01-02 04:10:12-05:002 2013-01-03 04:10:12-05:003 2013-01-04 04:10:12-05:00dtype: datetime64[ns, US/Eastern]
You can also format datetime values as strings with Series.dt.strftime() which supports the same format as the standard strftime().
# DatetimeIndexIn [282]:s=pd.Series(pd.date_range("20130101",periods=4))In [283]:sOut[283]:0 2013-01-011 2013-01-022 2013-01-033 2013-01-04dtype: datetime64[ns]In [284]:s.dt.strftime("%Y/%m/%d")Out[284]:0 2013/01/011 2013/01/022 2013/01/033 2013/01/04dtype: object
# PeriodIndexIn [285]:s=pd.Series(pd.period_range("20130101",periods=4))In [286]:sOut[286]:0 2013-01-011 2013-01-022 2013-01-033 2013-01-04dtype: period[D]In [287]:s.dt.strftime("%Y/%m/%d")Out[287]:0 2013/01/011 2013/01/022 2013/01/033 2013/01/04dtype: object
The .dt accessor works for period and timedelta dtypes.
# periodIn [288]:s=pd.Series(pd.period_range("20130101",periods=4,freq="D"))In [289]:sOut[289]:0 2013-01-011 2013-01-022 2013-01-033 2013-01-04dtype: period[D]In [290]:s.dt.yearOut[290]:0 20131 20132 20133 2013dtype: int64In [291]:s.dt.dayOut[291]:0 11 22 33 4dtype: int64
# timedeltaIn [292]:s=pd.Series(pd.timedelta_range("1 day 00:00:05",periods=4,freq="s"))In [293]:sOut[293]:0 1 days 00:00:051 1 days 00:00:062 1 days 00:00:073 1 days 00:00:08dtype: timedelta64[ns]In [294]:s.dt.daysOut[294]:0 11 12 13 1dtype: int64In [295]:s.dt.secondsOut[295]:0 51 62 73 8dtype: int32In [296]:s.dt.componentsOut[296]: days hours minutes seconds milliseconds microseconds nanoseconds0 1 0 0 5 0 0 01 1 0 0 6 0 0 02 1 0 0 7 0 0 03 1 0 0 8 0 0 0
Note
Series.dt will raise a TypeError if you access it with non-datetime-like values.
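For example, a sketch of what happens on a plain integer Series (the exact exception type and message may vary between pandas versions):
s_numeric = pd.Series([1, 2, 3])
try:
    s_numeric.dt.hour
except (TypeError, AttributeError) as exc:
    print(exc)   # complains that the values are not datetimelike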
Vectorized string methods#
Series is equipped with a set of string processing methods that make it easy to operate on each element of the array. Perhaps most importantly, these methods exclude missing/NA values automatically. These are accessed via the Series’s str attribute and generally have names matching the equivalent (scalar) built-in string methods. For example:
In [297]:s=pd.Series( .....:["A","B","C","Aaba","Baca",np.nan,"CABA","dog","cat"],dtype="string" .....:) .....:In [298]:s.str.lower()Out[298]:0 a1 b2 c3 aaba4 baca5 <NA>6 caba7 dog8 catdtype: string
Powerful pattern-matching methods are provided as well, but note that pattern-matching generally uses regular expressions by default (and in some cases always uses them).
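As a brief sketch, str.contains() treats its pattern as a regular expression unless told otherwise:
s_pat = pd.Series(["cat", "dog", "c.t"], dtype="string")
s_pat.str.contains("c.t")                # regex by default: matches "cat" and "c.t"
s_pat.str.contains("c.t", regex=False)   # literal match: only "c.t"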
Note
Prior to pandas 1.0, string methods were only available on object-dtype Series. pandas 1.0 added the StringDtype which is dedicated to strings. See Text data types for more.
Please see Vectorized String Methods for a complete description.
Sorting#
pandas supports three kinds of sorting: sorting by index labels, sorting by column values, and sorting by a combination of both.
By index#
The Series.sort_index() and DataFrame.sort_index() methods are used to sort a pandas object by its index levels.
In [299]:df=pd.DataFrame( .....:{ .....:"one":pd.Series(np.random.randn(3),index=["a","b","c"]), .....:"two":pd.Series(np.random.randn(4),index=["a","b","c","d"]), .....:"three":pd.Series(np.random.randn(3),index=["b","c","d"]), .....:} .....:) .....:In [300]:unsorted_df=df.reindex( .....:index=["a","d","c","b"],columns=["three","two","one"] .....:) .....:In [301]:unsorted_dfOut[301]: three two onea NaN -1.152244 0.562973d -0.252916 -0.109597 NaNc 1.273388 -0.167123 0.640382b -0.098217 0.009797 -1.299504# DataFrameIn [302]:unsorted_df.sort_index()Out[302]: three two onea NaN -1.152244 0.562973b -0.098217 0.009797 -1.299504c 1.273388 -0.167123 0.640382d -0.252916 -0.109597 NaNIn [303]:unsorted_df.sort_index(ascending=False)Out[303]: three two oned -0.252916 -0.109597 NaNc 1.273388 -0.167123 0.640382b -0.098217 0.009797 -1.299504a NaN -1.152244 0.562973In [304]:unsorted_df.sort_index(axis=1)Out[304]: one three twoa 0.562973 NaN -1.152244d NaN -0.252916 -0.109597c 0.640382 1.273388 -0.167123b -1.299504 -0.098217 0.009797# SeriesIn [305]:unsorted_df["three"].sort_index()Out[305]:a NaNb -0.098217c 1.273388d -0.252916Name: three, dtype: float64
Sorting by index also supports a key parameter that takes a callable function to apply to the index being sorted. For MultiIndex objects, the key is applied per-level to the levels specified by level.
In [306]:s1=pd.DataFrame({"a":["B","a","C"],"b":[1,2,3],"c":[2,3,4]}).set_index( .....:list("ab") .....:) .....:In [307]:s1Out[307]: ca bB 1 2a 2 3C 3 4
In [308]:s1.sort_index(level="a")Out[308]: ca bB 1 2C 3 4a 2 3In [309]:s1.sort_index(level="a",key=lambdaidx:idx.str.lower())Out[309]: ca ba 2 3B 1 2C 3 4
For information on key sorting by value, see value sorting.
By values#
The Series.sort_values() method is used to sort a Series by its values. The DataFrame.sort_values() method is used to sort a DataFrame by its column or row values. The optional by parameter to DataFrame.sort_values() may be used to specify one or more columns to use to determine the sorted order.
In [310]:df1=pd.DataFrame( .....:{"one":[2,1,1,1],"two":[1,3,2,4],"three":[5,4,3,2]} .....:) .....:In [311]:df1.sort_values(by="two")Out[311]: one two three0 2 1 52 1 2 31 1 3 43 1 4 2
The by parameter can take a list of column names, e.g.:
In [312]:df1[["one","two","three"]].sort_values(by=["one","two"])Out[312]: one two three2 1 2 31 1 3 43 1 4 20 2 1 5
These methods have special treatment of NA values via the na_position argument:
In [313]:s[2]=np.nanIn [314]:s.sort_values()Out[314]:0 A3 Aaba1 B4 Baca6 CABA8 cat7 dog2 <NA>5 <NA>dtype: stringIn [315]:s.sort_values(na_position="first")Out[315]:2 <NA>5 <NA>0 A3 Aaba1 B4 Baca6 CABA8 cat7 dogdtype: string
Sorting also supports a key parameter that takes a callable function to apply to the values being sorted.
In [316]:s1=pd.Series(["B","a","C"])
In [317]:s1.sort_values()Out[317]:0 B2 C1 adtype: objectIn [318]:s1.sort_values(key=lambdax:x.str.lower())Out[318]:1 a0 B2 Cdtype: object
key will be given the Series of values and should return a Series or array of the same shape with the transformed values. For DataFrame objects, the key is applied per column, so the key should still expect a Series and return a Series, e.g.
In [319]:df=pd.DataFrame({"a":["B","a","C"],"b":[1,2,3]})
In [320]:df.sort_values(by="a")Out[320]: a b0 B 12 C 31 a 2In [321]:df.sort_values(by="a",key=lambdacol:col.str.lower())Out[321]: a b1 a 20 B 12 C 3
The name or type of each column can be used to apply different functions to different columns.
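For example, a sketch that branches on the column name inside the key callable (the frame and column names here are illustrative):
df_key = pd.DataFrame({"a": ["B", "a", "C"], "b": [3, 1, 2]})
df_key.sort_values(
    by=["a", "b"],
    key=lambda col: col.str.lower() if col.name == "a" else col,   # lower-case only column "a"
)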
By indexes and values#
Strings passed as the by parameter to DataFrame.sort_values() may refer to either columns or index level names.
# Build MultiIndexIn [322]:idx=pd.MultiIndex.from_tuples( .....:[("a",1),("a",2),("a",2),("b",2),("b",1),("b",1)] .....:) .....:In [323]:idx.names=["first","second"]# Build DataFrameIn [324]:df_multi=pd.DataFrame({"A":np.arange(6,0,-1)},index=idx)In [325]:df_multiOut[325]: Afirst seconda 1 6 2 5 2 4b 2 3 1 2 1 1
Sort by ‘second’ (index) and ‘A’ (column)
In [326]:df_multi.sort_values(by=["second","A"])Out[326]: Afirst secondb 1 1 1 2a 1 6b 2 3a 2 4 2 5
Note
If a string matches both a column name and an index level name then a warning is issued and the column takes precedence. This will result in an ambiguity error in a future version.
searchsorted#
Series has the searchsorted() method, which works similarly to numpy.ndarray.searchsorted().
In [327]:ser=pd.Series([1,2,3])In [328]:ser.searchsorted([0,3])Out[328]:array([0, 2])In [329]:ser.searchsorted([0,4])Out[329]:array([0, 3])In [330]:ser.searchsorted([1,3],side="right")Out[330]:array([1, 3])In [331]:ser.searchsorted([1,3],side="left")Out[331]:array([0, 2])In [332]:ser=pd.Series([3,1,2])In [333]:ser.searchsorted([0,3],sorter=np.argsort(ser))Out[333]:array([0, 2])
smallest / largest values#
Series has the nsmallest() and nlargest() methods which return the smallest or largest \(n\) values. For a large Series this can be much faster than sorting the entire Series and calling head(n) on the result.
In [334]:s=pd.Series(np.random.permutation(10))In [335]:sOut[335]:0 21 02 33 74 15 56 97 68 89 4dtype: int64In [336]:s.sort_values()Out[336]:1 04 10 22 39 45 57 63 78 86 9dtype: int64In [337]:s.nsmallest(3)Out[337]:1 04 10 2dtype: int64In [338]:s.nlargest(3)Out[338]:6 98 83 7dtype: int64
DataFrame also has the nlargest and nsmallest methods.
In [339]:df=pd.DataFrame( .....:{ .....:"a":[-2,-1,1,10,8,11,-1], .....:"b":list("abdceff"), .....:"c":[1.0,2.0,4.0,3.2,np.nan,3.0,4.0], .....:} .....:) .....:In [340]:df.nlargest(3,"a")Out[340]: a b c5 11 f 3.03 10 c 3.24 8 e NaNIn [341]:df.nlargest(5,["a","c"])Out[341]: a b c5 11 f 3.03 10 c 3.24 8 e NaN2 1 d 4.06 -1 f 4.0In [342]:df.nsmallest(3,"a")Out[342]: a b c0 -2 a 1.01 -1 b 2.06 -1 f 4.0In [343]:df.nsmallest(5,["a","c"])Out[343]: a b c0 -2 a 1.01 -1 b 2.06 -1 f 4.02 1 d 4.04 8 e NaN
Sorting by a MultiIndex column#
You must be explicit about sorting when the column is a MultiIndex, and fully specify all levels to by.
In [344]:df1.columns=pd.MultiIndex.from_tuples( .....:[("a","one"),("a","two"),("b","three")] .....:) .....:In [345]:df1.sort_values(by=("a","two"))Out[345]: a b one two three0 2 1 52 1 2 31 1 3 43 1 4 2
Copying#
The copy() method on pandas objects copies the underlying data (though not the axis indexes, since they are immutable) and returns a new object. Note that it is seldom necessary to copy objects. For example, there are only a handful of ways to alter a DataFrame in-place:
Inserting, deleting, or modifying a column.
Assigning to the index or columns attributes.
For homogeneous data, directly modifying the values via the values attribute or advanced indexing.
To be clear, no pandas method has the side effect of modifying your data; almost every method returns a new object, leaving the original object untouched. If the data is modified, it is because you did so explicitly.
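As a quick sketch, a copy is independent of the object it was made from (the variable names here are made up):
df_src = pd.DataFrame({"a": [1, 2, 3]})
df_copy = df_src.copy()
df_copy.loc[0, "a"] = 100
df_src.loc[0, "a"]   # still 1: the copy owns its own data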
dtypes#
For the most part, pandas uses NumPy arrays and dtypes for Series or individual columns of a DataFrame. NumPy provides support for float, int, bool, timedelta64[ns] and datetime64[ns] (note that NumPy does not support timezone-aware datetimes).
pandas and third-party libraries extend NumPy’s type system in a few places. This section describes the extensions pandas has made internally. See Extension types for how to write your own extension that works with pandas. See the ecosystem page for a list of third-party libraries that have implemented an extension.
The following table lists all of pandas extension types. For methods requiring dtype arguments, strings can be specified as indicated. See the respective documentation sections for more on each type.
Kind of Data | Data Type | Scalar | Array | String Aliases
---|---|---|---|---
tz-aware datetime | DatetimeTZDtype | Timestamp | arrays.DatetimeArray | 'datetime64[ns, <tz>]'
Categorical | CategoricalDtype | (none) | Categorical | 'category'
period (time spans) | PeriodDtype | Period | arrays.PeriodArray | 'period[<freq>]', 'Period[<freq>]'
sparse | SparseDtype | (none) | arrays.SparseArray | 'Sparse', 'Sparse[int]', 'Sparse[float]'
intervals | IntervalDtype | Interval | arrays.IntervalArray | 'interval', 'Interval', 'Interval[<numpy_dtype>]'
nullable integer | Int64Dtype, … | (none) | arrays.IntegerArray | 'Int8', 'Int16', 'Int32', 'Int64', 'UInt8', 'UInt16', 'UInt32', 'UInt64'
nullable float | Float64Dtype, … | (none) | arrays.FloatingArray | 'Float32', 'Float64'
Strings | StringDtype | str | arrays.StringArray | 'string'
Boolean (with NA) | BooleanDtype | bool | arrays.BooleanArray | 'boolean'
pandas has two ways to store strings.
object dtype, which can hold any Python object, including strings.
StringDtype, which is dedicated to strings.
Generally, we recommend using StringDtype. See Text data types for more.
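A small sketch of the difference when constructing a Series with missing values:
pd.Series(["a", "b", None])                   # object dtype; the missing value stays None
pd.Series(["a", "b", None], dtype="string")   # StringDtype; the missing value becomes <NA>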
Finally, arbitrary objects may be stored using the object dtype, but should be avoided to the extent possible (for performance and interoperability with other libraries and methods; see object conversion).
A convenient dtypes attribute for DataFrame returns a Series with the data type of each column.
In [346]:dft=pd.DataFrame( .....:{ .....:"A":np.random.rand(3), .....:"B":1, .....:"C":"foo", .....:"D":pd.Timestamp("20010102"), .....:"E":pd.Series([1.0]*3).astype("float32"), .....:"F":False, .....:"G":pd.Series([1]*3,dtype="int8"), .....:} .....:) .....:In [347]:dftOut[347]: A B C D E F G0 0.035962 1 foo 2001-01-02 1.0 False 11 0.701379 1 foo 2001-01-02 1.0 False 12 0.281885 1 foo 2001-01-02 1.0 False 1In [348]:dft.dtypesOut[348]:A float64B int64C objectD datetime64[s]E float32F boolG int8dtype: object
On a Series object, use the dtype attribute.
In [349]:dft["A"].dtypeOut[349]:dtype('float64')
If a pandas object contains data with multiple dtypes in a single column, the dtype of the column will be chosen to accommodate all of the data types (object is the most general).
# these ints are coerced to floatsIn [350]:pd.Series([1,2,3,4,5,6.0])Out[350]:0 1.01 2.02 3.03 4.04 5.05 6.0dtype: float64# string data forces an ``object`` dtypeIn [351]:pd.Series([1,2,3,6.0,"foo"])Out[351]:0 11 22 33 6.04 foodtype: object
The number of columns of each type in a DataFrame can be found by calling DataFrame.dtypes.value_counts().
In [352]:dft.dtypes.value_counts()Out[352]:float64 1int64 1object 1datetime64[s] 1float32 1bool 1int8 1Name: count, dtype: int64
Numeric dtypes will propagate and can coexist in DataFrames. If a dtype is passed (either directly via the dtype keyword, a passed ndarray, or a passed Series), then it will be preserved in DataFrame operations. Furthermore, different numeric dtypes will NOT be combined. The following example will give you a taste.
In [353]:df1=pd.DataFrame(np.random.randn(8,1),columns=["A"],dtype="float32")In [354]:df1Out[354]: A0 0.2243641 1.8905462 0.1828793 0.7878474 -0.1884495 0.6677156 -0.0117367 -0.399073In [355]:df1.dtypesOut[355]:A float32dtype: objectIn [356]:df2=pd.DataFrame( .....:{ .....:"A":pd.Series(np.random.randn(8),dtype="float16"), .....:"B":pd.Series(np.random.randn(8)), .....:"C":pd.Series(np.random.randint(0,255,size=8),dtype="uint8"),# [0,255] (range of uint8) .....:} .....:) .....:In [357]:df2Out[357]: A B C0 0.823242 0.256090 261 1.607422 1.426469 862 -0.333740 -0.416203 463 -0.063477 1.139976 2124 -1.014648 -1.193477 265 0.678711 0.096706 76 -0.040863 -1.956850 1847 -0.357422 -0.714337 206In [358]:df2.dtypesOut[358]:A float16B float64C uint8dtype: object
defaults#
By default integer types are int64 and float types are float64, regardless of platform (32-bit or 64-bit). The following will all result in int64 dtypes.
In [359]:pd.DataFrame([1,2],columns=["a"]).dtypesOut[359]:a int64dtype: objectIn [360]:pd.DataFrame({"a":[1,2]}).dtypesOut[360]:a int64dtype: objectIn [361]:pd.DataFrame({"a":1},index=list(range(2))).dtypesOut[361]:a int64dtype: object
Note that NumPy will choose platform-dependent types when creating arrays. The following WILL result in int32 on a 32-bit platform.
In [362]:frame=pd.DataFrame(np.array([1,2]))
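If you need the same dtype on every platform, a sketch of the usual workaround is to request it explicitly:
pd.DataFrame(np.array([1, 2])).dtypes                  # int32 on a 32-bit platform, int64 on 64-bit
pd.DataFrame(np.array([1, 2]), dtype="int64").dtypes   # int64 regardless of platform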
upcasting#
Types can potentially be upcasted when combined with other types, meaning they are promoted from the current type (e.g. int to float).
In [363]:df3=df1.reindex_like(df2).fillna(value=0.0)+df2In [364]:df3Out[364]: A B C0 1.047606 0.256090 26.01 3.497968 1.426469 86.02 -0.150862 -0.416203 46.03 0.724370 1.139976 212.04 -1.203098 -1.193477 26.05 1.346426 0.096706 7.06 -0.052599 -1.956850 184.07 -0.756495 -0.714337 206.0In [365]:df3.dtypesOut[365]:A float32B float64C float64dtype: object
DataFrame.to_numpy() will return the lower-common-denominator of the dtypes, meaning the dtype that can accommodate ALL of the types in the resulting homogeneous dtyped NumPy array. This can force some upcasting.
In [366]:df3.to_numpy().dtypeOut[366]:dtype('float64')
astype#
You can use the astype() method to explicitly convert dtypes from one to another. These will by default return a copy, even if the dtype was unchanged (pass copy=False to change this behavior). In addition, they will raise an exception if the astype operation is invalid.
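For instance, a sketch of an invalid conversion raising (the exact message depends on the NumPy/pandas version):
s_mixed = pd.Series(["1", "2", "apple"])
try:
    s_mixed.astype("int64")
except ValueError as exc:
    print(exc)   # "apple" cannot be parsed as an integer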
Upcasting is always according to the NumPy rules. If two different dtypes are involved in an operation, then the more general one will be used as the result of the operation.
In [367]:df3Out[367]: A B C0 1.047606 0.256090 26.01 3.497968 1.426469 86.02 -0.150862 -0.416203 46.03 0.724370 1.139976 212.04 -1.203098 -1.193477 26.05 1.346426 0.096706 7.06 -0.052599 -1.956850 184.07 -0.756495 -0.714337 206.0In [368]:df3.dtypesOut[368]:A float32B float64C float64dtype: object# conversion of dtypesIn [369]:df3.astype("float32").dtypesOut[369]:A float32B float32C float32dtype: object
Convert a subset of columns to a specified type using astype().
In [370]:dft=pd.DataFrame({"a":[1,2,3],"b":[4,5,6],"c":[7,8,9]})In [371]:dft[["a","b"]]=dft[["a","b"]].astype(np.uint8)In [372]:dftOut[372]: a b c0 1 4 71 2 5 82 3 6 9In [373]:dft.dtypesOut[373]:a uint8b uint8c int64dtype: object
Convert certain columns to a specific dtype by passing a dict to astype().
In [374]:dft1=pd.DataFrame({"a":[1,0,1],"b":[4,5,6],"c":[7,8,9]})In [375]:dft1=dft1.astype({"a":np.bool_,"c":np.float64})In [376]:dft1Out[376]: a b c0 True 4 7.01 False 5 8.02 True 6 9.0In [377]:dft1.dtypesOut[377]:a boolb int64c float64dtype: object
Note
When trying to convert a subset of columns to a specified type using astype() and loc(), upcasting occurs.
loc() tries to fit in what we are assigning to the current dtypes, while [] will overwrite them taking the dtype from the right hand side. Therefore the following piece of code produces the unintended result.
In [378]:dft=pd.DataFrame({"a":[1,2,3],"b":[4,5,6],"c":[7,8,9]})In [379]:dft.loc[:,["a","b"]].astype(np.uint8).dtypesOut[379]:a uint8b uint8dtype: objectIn [380]:dft.loc[:,["a","b"]]=dft.loc[:,["a","b"]].astype(np.uint8)In [381]:dft.dtypesOut[381]:a int64b int64c int64dtype: object
object conversion#
pandas offers various functions to try to force conversion of types from the object dtype to other types. In cases where the data is already of the correct type, but stored in an object array, the DataFrame.infer_objects() and Series.infer_objects() methods can be used to soft convert to the correct type.
In [382]:importdatetimeIn [383]:df=pd.DataFrame( .....:[ .....:[1,2], .....:["a","b"], .....:[datetime.datetime(2016,3,2),datetime.datetime(2016,3,2)], .....:] .....:) .....:In [384]:df=df.TIn [385]:dfOut[385]: 0 1 20 1 a 2016-03-02 00:00:001 2 b 2016-03-02 00:00:00In [386]:df.dtypesOut[386]:0 object1 object2 objectdtype: object
Because the data was transposed the original inference stored all columns as object, which infer_objects will correct.
In [387]:df.infer_objects().dtypesOut[387]:0 int641 object2 datetime64[ns]dtype: object
The following functions are available for one dimensional object arrays or scalars to perform hard conversion of objects to a specified type:
to_numeric() (conversion to numeric dtypes)
In [388]:m=["1.1",2,3]In [389]:pd.to_numeric(m)Out[389]:array([1.1, 2. , 3. ])
to_datetime() (conversion to datetime objects)
In [390]:importdatetimeIn [391]:m=["2016-07-09",datetime.datetime(2016,3,2)]In [392]:pd.to_datetime(m)Out[392]:DatetimeIndex(['2016-07-09', '2016-03-02'], dtype='datetime64[ns]', freq=None)
to_timedelta() (conversion to timedelta objects)
In [393]:m=["5us",pd.Timedelta("1day")]In [394]:pd.to_timedelta(m)Out[394]:TimedeltaIndex(['0 days 00:00:00.000005', '1 days 00:00:00'], dtype='timedelta64[ns]', freq=None)
To force a conversion, we can pass in an errors argument, which specifies how pandas should deal with elements that cannot be converted to the desired dtype or object. By default, errors='raise', meaning that any errors encountered will be raised during the conversion process. However, if errors='coerce', these errors will be ignored and pandas will convert problematic elements to pd.NaT (for datetime and timedelta) or np.nan (for numeric). This might be useful if you are reading in data which is mostly of the desired dtype (e.g. numeric, datetime), but occasionally has non-conforming elements intermixed that you want to represent as missing:
In [395]:importdatetimeIn [396]:m=["apple",datetime.datetime(2016,3,2)]In [397]:pd.to_datetime(m,errors="coerce")Out[397]:DatetimeIndex(['NaT', '2016-03-02'], dtype='datetime64[ns]', freq=None)In [398]:m=["apple",2,3]In [399]:pd.to_numeric(m,errors="coerce")Out[399]:array([nan, 2., 3.])In [400]:m=["apple",pd.Timedelta("1day")]In [401]:pd.to_timedelta(m,errors="coerce")Out[401]:TimedeltaIndex([NaT, '1 days'], dtype='timedelta64[ns]', freq=None)
In addition to object conversion, to_numeric() provides another argument downcast, which gives the option of downcasting the newly (or already) numeric data to a smaller dtype, which can conserve memory:
In [402]:m=["1",2,3]In [403]:pd.to_numeric(m,downcast="integer")# smallest signed int dtypeOut[403]:array([1, 2, 3], dtype=int8)In [404]:pd.to_numeric(m,downcast="signed")# same as 'integer'Out[404]:array([1, 2, 3], dtype=int8)In [405]:pd.to_numeric(m,downcast="unsigned")# smallest unsigned int dtypeOut[405]:array([1, 2, 3], dtype=uint8)In [406]:pd.to_numeric(m,downcast="float")# smallest float dtypeOut[406]:array([1., 2., 3.], dtype=float32)
As these methods apply only to one-dimensional arrays, lists or scalars, they cannot be used directly on multi-dimensional objects such as DataFrames. However, with apply(), we can “apply” the function over each column efficiently:
In [407]:importdatetimeIn [408]:df=pd.DataFrame([["2016-07-09",datetime.datetime(2016,3,2)]]*2,dtype="O")In [409]:dfOut[409]: 0 10 2016-07-09 2016-03-02 00:00:001 2016-07-09 2016-03-02 00:00:00In [410]:df.apply(pd.to_datetime)Out[410]: 0 10 2016-07-09 2016-03-021 2016-07-09 2016-03-02In [411]:df=pd.DataFrame([["1.1",2,3]]*2,dtype="O")In [412]:dfOut[412]: 0 1 20 1.1 2 31 1.1 2 3In [413]:df.apply(pd.to_numeric)Out[413]: 0 1 20 1.1 2 31 1.1 2 3In [414]:df=pd.DataFrame([["5us",pd.Timedelta("1day")]]*2,dtype="O")In [415]:dfOut[415]: 0 10 5us 1 days 00:00:001 5us 1 days 00:00:00In [416]:df.apply(pd.to_timedelta)Out[416]: 0 10 0 days 00:00:00.000005 1 days1 0 days 00:00:00.000005 1 days
gotchas#
Performing selection operations on integer type data can easily upcast the data to floating. The dtype of the input data will be preserved in cases where nans are not introduced. See also Support for integer NA.
In [417]:dfi=df3.astype("int32")In [418]:dfi["E"]=1In [419]:dfiOut[419]: A B C E0 1 0 26 11 3 1 86 12 0 0 46 13 0 1 212 14 -1 -1 26 15 1 0 7 16 0 -1 184 17 0 0 206 1In [420]:dfi.dtypesOut[420]:A int32B int32C int32E int64dtype: objectIn [421]:casted=dfi[dfi>0]In [422]:castedOut[422]: A B C E0 1.0 NaN 26 11 3.0 1.0 86 12 NaN NaN 46 13 NaN 1.0 212 14 NaN NaN 26 15 1.0 NaN 7 16 NaN NaN 184 17 NaN NaN 206 1In [423]:casted.dtypesOut[423]:A float64B float64C int32E int64dtype: object
While float dtypes are unchanged.
In [424]:dfa=df3.copy()In [425]:dfa["A"]=dfa["A"].astype("float32")In [426]:dfa.dtypesOut[426]:A float32B float64C float64dtype: objectIn [427]:casted=dfa[df2>0]In [428]:castedOut[428]: A B C0 1.047606 0.256090 26.01 3.497968 1.426469 86.02 NaN NaN 46.03 NaN 1.139976 212.04 NaN NaN 26.05 1.346426 0.096706 7.06 NaN NaN 184.07 NaN NaN 206.0In [429]:casted.dtypesOut[429]:A float32B float64C float64dtype: object
Selecting columns based on dtype#
The select_dtypes() method implements subsetting of columns based on their dtype.
First, let’s create a DataFrame with a slew of different dtypes:
In [430]:df=pd.DataFrame( .....:{ .....:"string":list("abc"), .....:"int64":list(range(1,4)), .....:"uint8":np.arange(3,6).astype("u1"), .....:"float64":np.arange(4.0,7.0), .....:"bool1":[True,False,True], .....:"bool2":[False,True,False], .....:"dates":pd.date_range("now",periods=3), .....:"category":pd.Series(list("ABC")).astype("category"), .....:} .....:) .....:In [431]:df["tdeltas"]=df.dates.diff()In [432]:df["uint64"]=np.arange(3,6).astype("u8")In [433]:df["other_dates"]=pd.date_range("20130101",periods=3)In [434]:df["tz_aware_dates"]=pd.date_range("20130101",periods=3,tz="US/Eastern")In [435]:dfOut[435]: string int64 uint8 ... uint64 other_dates tz_aware_dates0 a 1 3 ... 3 2013-01-01 2013-01-01 00:00:00-05:001 b 2 4 ... 4 2013-01-02 2013-01-02 00:00:00-05:002 c 3 5 ... 5 2013-01-03 2013-01-03 00:00:00-05:00[3 rows x 12 columns]
And the dtypes:
In [436]:df.dtypesOut[436]:string objectint64 int64uint8 uint8float64 float64bool1 boolbool2 booldates datetime64[ns]category categorytdeltas timedelta64[ns]uint64 uint64other_dates datetime64[ns]tz_aware_dates datetime64[ns, US/Eastern]dtype: object
select_dtypes() has two parameters include and exclude that allow you to say “give me the columns with these dtypes” (include) and/or “give the columns without these dtypes” (exclude).
For example, to select bool columns:
In [437]:df.select_dtypes(include=[bool])Out[437]: bool1 bool20 True False1 False True2 True False
You can also pass the name of a dtype in the NumPy dtype hierarchy:
In [438]:df.select_dtypes(include=["bool"])Out[438]: bool1 bool20 True False1 False True2 True False
select_dtypes() also works with generic dtypes.
For example, to select all numeric and boolean columns while excluding unsigned integers:
In [439]:df.select_dtypes(include=["number","bool"],exclude=["unsignedinteger"])Out[439]: int64 float64 bool1 bool2 tdeltas0 1 4.0 True False NaT1 2 5.0 False True 1 days2 3 6.0 True False 1 days
To select string columns you must use the object dtype:
In [440]:df.select_dtypes(include=["object"])Out[440]: string0 a1 b2 c
To see all the child dtypes of a generic dtype like numpy.number you can define a function that returns a tree of child dtypes:
In [441]:defsubdtypes(dtype): .....:subs=dtype.__subclasses__() .....:ifnotsubs: .....:returndtype .....:return[dtype,[subdtypes(dt)fordtinsubs]] .....:
All NumPy dtypes are subclasses of numpy.generic:
In [442]:subdtypes(np.generic)Out[442]:[numpy.generic, [[numpy.number, [[numpy.integer, [[numpy.signedinteger, [numpy.int8, numpy.int16, numpy.int32, numpy.int64, numpy.longlong, numpy.timedelta64]], [numpy.unsignedinteger, [numpy.uint8, numpy.uint16, numpy.uint32, numpy.uint64, numpy.ulonglong]]]], [numpy.inexact, [[numpy.floating, [numpy.float16, numpy.float32, numpy.float64, numpy.longdouble]], [numpy.complexfloating, [numpy.complex64, numpy.complex128, numpy.clongdouble]]]]]], [numpy.flexible, [[numpy.character, [numpy.bytes_, numpy.str_]], [numpy.void, [numpy.record]]]], numpy.bool_, numpy.datetime64, numpy.object_]]
Note
pandas also defines the types category and datetime64[ns, tz], which are not integrated into the normal NumPy hierarchy and won’t show up with the above function.
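They can still be selected by name with select_dtypes(); a short sketch using the DataFrame built above:
df.select_dtypes(include=["category"])     # the 'category' column
df.select_dtypes(include=["datetimetz"])   # timezone-aware datetime columns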