
Cookbook#

This is a repository for short and sweet examples and links for useful pandas recipes. We encourage users to add to this documentation.

Adding interesting links and/or inline examples to this section is a great First Pull Request.

Simplified, condensed, new-user-friendly, in-line examples have been inserted where possible to augment the Stack Overflow and GitHub links. Many of the links contain expanded information, above what the in-line examples offer.

pandas (pd) and NumPy (np) are the only two abbreviated imported modules. The rest are kept explicitly imported for newer users.

Idioms#

These are some neat pandas idioms

if-then/if-then-else on one column, and assignment to another one or more columns:

In [1]: df = pd.DataFrame(
   ...:     {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
   ...: )

In [2]: df
Out[2]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

if-then…#

An if-then on one column

In [3]: df.loc[df.AAA >= 5, "BBB"] = -1

In [4]: df
Out[4]:
   AAA  BBB  CCC
0    4   10  100
1    5   -1   50
2    6   -1  -30
3    7   -1  -50

An if-then with assignment to 2 columns:

In [5]: df.loc[df.AAA >= 5, ["BBB", "CCC"]] = 555

In [6]: df
Out[6]:
   AAA  BBB  CCC
0    4   10  100
1    5  555  555
2    6  555  555
3    7  555  555

Add another line with different logic, to do the -else

In [7]: df.loc[df.AAA < 5, ["BBB", "CCC"]] = 2000

In [8]: df
Out[8]:
   AAA   BBB   CCC
0    4  2000  2000
1    5   555   555
2    6   555   555
3    7   555   555

Or use pandas where after you’ve set up a mask

In [9]: df_mask = pd.DataFrame(
   ...:     {"AAA": [True] * 4, "BBB": [False] * 4, "CCC": [True, False] * 2}
   ...: )

In [10]: df.where(df_mask, -1000)
Out[10]:
   AAA   BBB   CCC
0    4 -1000  2000
1    5 -1000 -1000
2    6 -1000   555
3    7 -1000 -1000

if-then-else using NumPy’s where()

In [11]: df = pd.DataFrame(
   ....:     {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
   ....: )

In [12]: df
Out[12]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [13]: df["logic"] = np.where(df["AAA"] > 5, "high", "low")

In [14]: df
Out[14]:
   AAA  BBB  CCC logic
0    4   10  100   low
1    5   20   50   low
2    6   30  -30  high
3    7   40  -50  high

Splitting#

Split a frame with a boolean criterion

In [15]: df = pd.DataFrame(
   ....:     {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
   ....: )

In [16]: df
Out[16]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [17]: df[df.AAA <= 5]
Out[17]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50

In [18]: df[df.AAA > 5]
Out[18]:
   AAA  BBB  CCC
2    6   30  -30
3    7   40  -50

Building criteria#

Select with multi-column criteria

In [19]: df = pd.DataFrame(
   ....:     {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
   ....: )

In [20]: df
Out[20]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

…and (without assignment returns a Series)

In [21]: df.loc[(df["BBB"] < 25) & (df["CCC"] >= -40), "AAA"]
Out[21]:
0    4
1    5
Name: AAA, dtype: int64

…or (without assignment returns a Series)

In [22]: df.loc[(df["BBB"] > 25) | (df["CCC"] >= -40), "AAA"]
Out[22]:
0    4
1    5
2    6
3    7
Name: AAA, dtype: int64

…or (with assignment modifies the DataFrame.)

In [23]: df.loc[(df["BBB"] > 25) | (df["CCC"] >= 75), "AAA"] = 999

In [24]: df
Out[24]:
   AAA  BBB  CCC
0  999   10  100
1    5   20   50
2  999   30  -30
3  999   40  -50

Select rows with data closest to a certain value using argsort

In [25]: df = pd.DataFrame(
   ....:     {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
   ....: )

In [26]: df
Out[26]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [27]: aValue = 43.0

In [28]: df.loc[(df.CCC - aValue).abs().argsort()]
Out[28]:
   AAA  BBB  CCC
1    5   20   50
0    4   10  100
2    6   30  -30
3    7   40  -50

Dynamically reduce a list of criteria using binary operators

In [29]: df = pd.DataFrame(
   ....:     {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
   ....: )

In [30]: df
Out[30]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [31]: Crit1 = df.AAA <= 5.5

In [32]: Crit2 = df.BBB == 10.0

In [33]: Crit3 = df.CCC > -40.0

One could hard code:

In [34]: AllCrit = Crit1 & Crit2 & Crit3

…Or it can be done with a list of dynamically built criteria

In [35]: import functools

In [36]: CritList = [Crit1, Crit2, Crit3]

In [37]: AllCrit = functools.reduce(lambda x, y: x & y, CritList)

In [38]: df[AllCrit]
Out[38]:
   AAA  BBB  CCC
0    4   10  100

Selection#

Dataframes#

The indexing docs.

Using both row labels and value conditionals

In [39]: df = pd.DataFrame(
   ....:     {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
   ....: )

In [40]: df
Out[40]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [41]: df[(df.AAA <= 6) & (df.index.isin([0, 2, 4]))]
Out[41]:
   AAA  BBB  CCC
0    4   10  100
2    6   30  -30

Use loc for label-oriented slicing and iloc for positional slicing GH 2904

In [42]: df = pd.DataFrame(
   ....:     {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]},
   ....:     index=["foo", "bar", "boo", "kar"],
   ....: )

There are two explicit slicing methods, with a third general case:

  1. Positional-oriented (Python slicing style: exclusive of end)

  2. Label-oriented (non-Python slicing style: inclusive of end)

  3. General (either slicing style: depends on whether the slice contains labels or positions)

In [43]: df.loc["bar":"kar"]  # Label
Out[43]:
     AAA  BBB  CCC
bar    5   20   50
boo    6   30  -30
kar    7   40  -50

# Generic
In [44]: df[0:3]
Out[44]:
     AAA  BBB  CCC
foo    4   10  100
bar    5   20   50
boo    6   30  -30

In [45]: df["bar":"kar"]
Out[45]:
     AAA  BBB  CCC
bar    5   20   50
boo    6   30  -30
kar    7   40  -50

Ambiguity arises when an index consists of integers with a non-zero start or non-unit increment.

In [46]: data = {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}

In [47]: df2 = pd.DataFrame(data=data, index=[1, 2, 3, 4])  # Note index starts at 1.

In [48]: df2.iloc[1:3]  # Position-oriented
Out[48]:
   AAA  BBB  CCC
2    5   20   50
3    6   30  -30

In [49]: df2.loc[1:3]  # Label-oriented
Out[49]:
   AAA  BBB  CCC
1    4   10  100
2    5   20   50
3    6   30  -30

Using inverse operator (~) to take the complement of a mask

In [50]: df = pd.DataFrame(
   ....:     {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
   ....: )

In [51]: df
Out[51]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [52]: df[~((df.AAA <= 6) & (df.index.isin([0, 2, 4])))]
Out[52]:
   AAA  BBB  CCC
1    5   20   50
3    7   40  -50

New columns#

Efficiently and dynamically creating new columns using DataFrame.map (previously named applymap)

In [53]: df = pd.DataFrame({"AAA": [1, 2, 1, 3], "BBB": [1, 1, 2, 2], "CCC": [2, 1, 3, 1]})

In [54]: df
Out[54]:
   AAA  BBB  CCC
0    1    1    2
1    2    1    1
2    1    2    3
3    3    2    1

In [55]: source_cols = df.columns  # Or some subset would work too

In [56]: new_cols = [str(x) + "_cat" for x in source_cols]

In [57]: categories = {1: "Alpha", 2: "Beta", 3: "Charlie"}

In [58]: df[new_cols] = df[source_cols].map(categories.get)

In [59]: df
Out[59]:
   AAA  BBB  CCC  AAA_cat BBB_cat  CCC_cat
0    1    1    2    Alpha   Alpha     Beta
1    2    1    1     Beta   Alpha    Alpha
2    1    2    3    Alpha    Beta  Charlie
3    3    2    1  Charlie    Beta    Alpha

Keep other columns when using min() with groupby

In [60]: df = pd.DataFrame(
   ....:     {"AAA": [1, 1, 1, 2, 2, 2, 3, 3], "BBB": [2, 1, 3, 4, 5, 1, 2, 3]}
   ....: )

In [61]: df
Out[61]:
   AAA  BBB
0    1    2
1    1    1
2    1    3
3    2    4
4    2    5
5    2    1
6    3    2
7    3    3

Method 1 : idxmin() to get the index of the minimums

In [62]: df.loc[df.groupby("AAA")["BBB"].idxmin()]
Out[62]:
   AAA  BBB
1    1    1
5    2    1
6    3    2

Method 2 : sort then take first of each

In [63]: df.sort_values(by="BBB").groupby("AAA", as_index=False).first()
Out[63]:
   AAA  BBB
0    1    1
1    2    1
2    3    2

Notice the same results, with the exception of the index.

Multiindexing#

The multiindexing docs.

Creating a MultiIndex from a labeled frame

In [64]:df=pd.DataFrame(   ....:{   ....:"row":[0,1,2],   ....:"One_X":[1.1,1.1,1.1],   ....:"One_Y":[1.2,1.2,1.2],   ....:"Two_X":[1.11,1.11,1.11],   ....:"Two_Y":[1.22,1.22,1.22],   ....:}   ....:)   ....:In [65]:dfOut[65]:   row  One_X  One_Y  Two_X  Two_Y0    0    1.1    1.2   1.11   1.221    1    1.1    1.2   1.11   1.222    2    1.1    1.2   1.11   1.22# As Labelled IndexIn [66]:df=df.set_index("row")In [67]:dfOut[67]:     One_X  One_Y  Two_X  Two_Yrow0      1.1    1.2   1.11   1.221      1.1    1.2   1.11   1.222      1.1    1.2   1.11   1.22# With Hierarchical ColumnsIn [68]:df.columns=pd.MultiIndex.from_tuples([tuple(c.split("_"))forcindf.columns])In [69]:dfOut[69]:     One        Two       X    Y     X     Yrow0    1.1  1.2  1.11  1.221    1.1  1.2  1.11  1.222    1.1  1.2  1.11  1.22# Now stack & ResetIn [70]:df=df.stack(0,future_stack=True).reset_index(1)In [71]:dfOut[71]:    level_1     X     Yrow0       One  1.10  1.200       Two  1.11  1.221       One  1.10  1.201       Two  1.11  1.222       One  1.10  1.202       Two  1.11  1.22# And fix the labels (Notice the label 'level_1' got added automatically)In [72]:df.columns=["Sample","All_X","All_Y"]In [73]:dfOut[73]:    Sample  All_X  All_Yrow0      One   1.10   1.200      Two   1.11   1.221      One   1.10   1.201      Two   1.11   1.222      One   1.10   1.202      Two   1.11   1.22

Arithmetic#

Performing arithmetic with a MultiIndex that needs broadcasting

In [74]:cols=pd.MultiIndex.from_tuples(   ....:[(x,y)forxin["A","B","C"]foryin["O","I"]]   ....:)   ....:In [75]:df=pd.DataFrame(np.random.randn(2,6),index=["n","m"],columns=cols)In [76]:dfOut[76]:          A                   B                   C          O         I         O         I         O         In  0.469112 -0.282863 -1.509059 -1.135632  1.212112 -0.173215m  0.119209 -1.044236 -0.861849 -2.104569 -0.494929  1.071804In [77]:df=df.div(df["C"],level=1)In [78]:dfOut[78]:          A                   B              C          O         I         O         I    O    In  0.387021  1.633022 -1.244983  6.556214  1.0  1.0m -0.240860 -0.974279  1.741358 -1.963577  1.0  1.0

Slicing#

Slicing a MultiIndex with xs

In [79]: coords = [("AA", "one"), ("AA", "six"), ("BB", "one"), ("BB", "two"), ("BB", "six")]

In [80]: index = pd.MultiIndex.from_tuples(coords)

In [81]: df = pd.DataFrame([11, 22, 33, 44, 55], index, ["MyData"])

In [82]: df
Out[82]:
        MyData
AA one      11
   six      22
BB one      33
   two      44
   six      55

To take the cross section of the 1st level and 1st axis of the index:

# Note : level and axis are optional, and default to zero
In [83]: df.xs("BB", level=0, axis=0)
Out[83]:
     MyData
one      33
two      44
six      55

…and now the 2nd level of the 1st axis.

In [84]: df.xs("six", level=1, axis=0)
Out[84]:
    MyData
AA      22
BB      55

Slicing a MultiIndex with xs, method #2

In [85]:importitertoolsIn [86]:index=list(itertools.product(["Ada","Quinn","Violet"],["Comp","Math","Sci"]))In [87]:headr=list(itertools.product(["Exams","Labs"],["I","II"]))In [88]:indx=pd.MultiIndex.from_tuples(index,names=["Student","Course"])In [89]:cols=pd.MultiIndex.from_tuples(headr)# Notice these are un-namedIn [90]:data=[[70+x+y+(x*y)%3forxinrange(4)]foryinrange(9)]In [91]:df=pd.DataFrame(data,indx,cols)In [92]:dfOut[92]:               Exams     Labs                   I  II    I  IIStudent CourseAda     Comp      70  71   72  73        Math      71  73   75  74        Sci       72  75   75  75Quinn   Comp      73  74   75  76        Math      74  76   78  77        Sci       75  78   78  78Violet  Comp      76  77   78  79        Math      77  79   81  80        Sci       78  81   81  81In [93]:All=slice(None)In [94]:df.loc["Violet"]Out[94]:       Exams     Labs           I  II    I  IICourseComp      76  77   78  79Math      77  79   81  80Sci       78  81   81  81In [95]:df.loc[(All,"Math"),All]Out[95]:               Exams     Labs                   I  II    I  IIStudent CourseAda     Math      71  73   75  74Quinn   Math      74  76   78  77Violet  Math      77  79   81  80In [96]:df.loc[(slice("Ada","Quinn"),"Math"),All]Out[96]:               Exams     Labs                   I  II    I  IIStudent CourseAda     Math      71  73   75  74Quinn   Math      74  76   78  77In [97]:df.loc[(All,"Math"),("Exams")]Out[97]:                 I  IIStudent CourseAda     Math    71  73Quinn   Math    74  76Violet  Math    77  79In [98]:df.loc[(All,"Math"),(All,"II")]Out[98]:               Exams Labs                  II   IIStudent CourseAda     Math      73   74Quinn   Math      76   77Violet  Math      79   80

Setting portions of a MultiIndex with xs

Sorting#

Sort by specific column or an ordered list of columns, with a MultiIndex

In [99]:df.sort_values(by=("Labs","II"),ascending=False)Out[99]:               Exams     Labs                   I  II    I  IIStudent CourseViolet  Sci       78  81   81  81        Math      77  79   81  80        Comp      76  77   78  79Quinn   Sci       75  78   78  78        Math      74  76   78  77        Comp      73  74   75  76Ada     Sci       72  75   75  75        Math      71  73   75  74        Comp      70  71   72  73

Partial selection, the need for sortedness GH 2995

Levels#

Prepending a level to a multiindex

Flatten Hierarchical columns
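
Both of these operations amount to rewriting df.columns. A minimal sketch, with a made-up frame, of prepending a level to a column MultiIndex and of flattening hierarchical columns into plain strings:

import numpy as np
import pandas as pd

df = pd.DataFrame(
    np.random.randn(2, 4),
    columns=pd.MultiIndex.from_product([["One", "Two"], ["X", "Y"]]),
)

# Prepend a level ("Group" is an arbitrary name) to the column MultiIndex
df_prepended = pd.concat({"Group": df}, axis=1)

# Flatten the hierarchical columns into single underscore-joined strings
df_flat = df.copy()
df_flat.columns = ["_".join(map(str, col)) for col in df.columns.to_flat_index()]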

Missing data#

The missing data docs.

Fill forward a reversed timeseries

In [100]: df = pd.DataFrame(
   .....:     np.random.randn(6, 1),
   .....:     index=pd.date_range("2013-08-01", periods=6, freq="B"),
   .....:     columns=list("A"),
   .....: )

In [101]: df.loc[df.index[3], "A"] = np.nan

In [102]: df
Out[102]:
                   A
2013-08-01  0.721555
2013-08-02 -0.706771
2013-08-05 -1.039575
2013-08-06       NaN
2013-08-07 -0.424972
2013-08-08  0.567020

In [103]: df.bfill()
Out[103]:
                   A
2013-08-01  0.721555
2013-08-02 -0.706771
2013-08-05 -1.039575
2013-08-06 -0.424972
2013-08-07 -0.424972
2013-08-08  0.567020

cumsum reset at NaN values
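
One way to sketch the cumsum-reset idea: group on the running count of NaNs, so each cumulative sum restarts after a missing value (the series below is made up):

import numpy as np
import pandas as pd

s = pd.Series([1, 2, np.nan, 3, 4, np.nan, 5])

# Every NaN starts a new group; cumsum then restarts within each group,
# while the NaN positions themselves stay NaN
reset_cumsum = s.groupby(s.isna().cumsum()).cumsum()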

Replace#

Using replace with backrefs
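
As a hedged illustration, Series.str.replace() with regex=True supports backreferences to captured groups (the data here are made up):

import pandas as pd

s = pd.Series(["Smith, John", "Doe, Jane"])

# Swap "last, first" into "first last" using the captured groups \1 and \2
s.str.replace(r"(\w+),\s*(\w+)", r"\2 \1", regex=True)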

Grouping#

The grouping docs.

Basic grouping with apply

Unlike agg, apply’s callable is passed a sub-DataFrame which gives you access to all the columns

In [104]:df=pd.DataFrame(   .....:{   .....:"animal":"cat dog cat fish dog cat cat".split(),   .....:"size":list("SSMMMLL"),   .....:"weight":[8,10,11,1,20,12,12],   .....:"adult":[False]*5+[True]*2,   .....:}   .....:)   .....:In [105]:dfOut[105]:  animal size  weight  adult0    cat    S       8  False1    dog    S      10  False2    cat    M      11  False3   fish    M       1  False4    dog    M      20  False5    cat    L      12   True6    cat    L      12   True# List the size of the animals with the highest weight.In [106]:df.groupby("animal").apply(lambdasubf:subf["size"][subf["weight"].idxmax()],include_groups=False)Out[106]:animalcat     Ldog     Mfish    Mdtype: object

Using get_group

In [107]: gb = df.groupby("animal")

In [108]: gb.get_group("cat")
Out[108]:
  animal size  weight  adult
0    cat    S       8  False
2    cat    M      11  False
5    cat    L      12   True
6    cat    L      12   True

Apply to different items in a group

In [109]:defGrowUp(x):   .....:avg_weight=sum(x[x["size"]=="S"].weight*1.5)   .....:avg_weight+=sum(x[x["size"]=="M"].weight*1.25)   .....:avg_weight+=sum(x[x["size"]=="L"].weight)   .....:avg_weight/=len(x)   .....:returnpd.Series(["L",avg_weight,True],index=["size","weight","adult"])   .....:In [110]:expected_df=gb.apply(GrowUp,include_groups=False)In [111]:expected_dfOut[111]:       size   weight  adultanimalcat       L  12.4375   Truedog       L  20.0000   Truefish      L   1.2500   True

Expanding apply

In [112]: S = pd.Series([i / 100.0 for i in range(1, 11)])

In [113]: def cum_ret(x, y):
   .....:     return x * (1 + y)
   .....:

In [114]: def red(x):
   .....:     return functools.reduce(cum_ret, x, 1.0)
   .....:

In [115]: S.expanding().apply(red, raw=True)
Out[115]:
0    1.010000
1    1.030200
2    1.061106
3    1.103550
4    1.158728
5    1.228251
6    1.314229
7    1.419367
8    1.547110
9    1.701821
dtype: float64

Replacing some values with mean of the rest of a group

In [116]: df = pd.DataFrame({"A": [1, 1, 2, 2], "B": [1, -1, 1, 2]})

In [117]: gb = df.groupby("A")

In [118]: def replace(g):
   .....:     mask = g < 0
   .....:     return g.where(~mask, g[~mask].mean())
   .....:

In [119]: gb.transform(replace)
Out[119]:
   B
0  1
1  1
2  1
3  2

Sort groups by aggregated data

In [120]:df=pd.DataFrame(   .....:{   .....:"code":["foo","bar","baz"]*2,   .....:"data":[0.16,-0.21,0.33,0.45,-0.59,0.62],   .....:"flag":[False,True]*3,   .....:}   .....:)   .....:In [121]:code_groups=df.groupby("code")In [122]:agg_n_sort_order=code_groups[["data"]].transform("sum").sort_values(by="data")In [123]:sorted_df=df.loc[agg_n_sort_order.index]In [124]:sorted_dfOut[124]:  code  data   flag1  bar -0.21   True4  bar -0.59  False0  foo  0.16  False3  foo  0.45   True2  baz  0.33  False5  baz  0.62   True

Create multiple aggregated columns

In [125]:rng=pd.date_range(start="2014-10-07",periods=10,freq="2min")In [126]:ts=pd.Series(data=list(range(10)),index=rng)In [127]:defMyCust(x):   .....:iflen(x)>2:   .....:returnx.iloc[1]*1.234   .....:returnpd.NaT   .....:In [128]:mhc={"Mean":"mean","Max":"max","Custom":MyCust}In [129]:ts.resample("5min").apply(mhc)Out[129]:                     Mean  Max Custom2014-10-07 00:00:00   1.0    2  1.2342014-10-07 00:05:00   3.5    4    NaT2014-10-07 00:10:00   6.0    7  7.4042014-10-07 00:15:00   8.5    9    NaTIn [130]:tsOut[130]:2014-10-07 00:00:00    02014-10-07 00:02:00    12014-10-07 00:04:00    22014-10-07 00:06:00    32014-10-07 00:08:00    42014-10-07 00:10:00    52014-10-07 00:12:00    62014-10-07 00:14:00    72014-10-07 00:16:00    82014-10-07 00:18:00    9Freq: 2min, dtype: int64

Create a value counts column and reassign back to the DataFrame

In [131]: df = pd.DataFrame(
   .....:     {"Color": "Red Red Red Blue".split(), "Value": [100, 150, 50, 50]}
   .....: )

In [132]: df
Out[132]:
  Color  Value
0   Red    100
1   Red    150
2   Red     50
3  Blue     50

In [133]: df["Counts"] = df.groupby(["Color"]).transform(len)

In [134]: df
Out[134]:
  Color  Value  Counts
0   Red    100       3
1   Red    150       3
2   Red     50       3
3  Blue     50       1

Shift groups of the values in a column based on the index

In [135]:df=pd.DataFrame(   .....:{"line_race":[10,10,8,10,10,8],"beyer":[99,102,103,103,88,100]},   .....:index=[   .....:"Last Gunfighter",   .....:"Last Gunfighter",   .....:"Last Gunfighter",   .....:"Paynter",   .....:"Paynter",   .....:"Paynter",   .....:],   .....:)   .....:In [136]:dfOut[136]:                 line_race  beyerLast Gunfighter         10     99Last Gunfighter         10    102Last Gunfighter          8    103Paynter                 10    103Paynter                 10     88Paynter                  8    100In [137]:df["beyer_shifted"]=df.groupby(level=0)["beyer"].shift(1)In [138]:dfOut[138]:                 line_race  beyer  beyer_shiftedLast Gunfighter         10     99            NaNLast Gunfighter         10    102           99.0Last Gunfighter          8    103          102.0Paynter                 10    103            NaNPaynter                 10     88          103.0Paynter                  8    100           88.0

Select row with maximum value from each group

In [139]:df=pd.DataFrame(   .....:{   .....:"host":["other","other","that","this","this"],   .....:"service":["mail","web","mail","mail","web"],   .....:"no":[1,2,1,2,1],   .....:}   .....:).set_index(["host","service"])   .....:In [140]:mask=df.groupby(level=0).agg("idxmax")In [141]:df_count=df.loc[mask["no"]].reset_index()In [142]:df_countOut[142]:    host service  no0  other     web   21   that    mail   12   this    mail   2

Grouping like Python’s itertools.groupby

In [143]: df = pd.DataFrame([0, 1, 0, 1, 1, 1, 0, 1, 1], columns=["A"])

In [144]: df["A"].groupby((df["A"] != df["A"].shift()).cumsum()).groups
Out[144]: {1: [0], 2: [1], 3: [2], 4: [3, 4, 5], 5: [6], 6: [7, 8]}

In [145]: df["A"].groupby((df["A"] != df["A"].shift()).cumsum()).cumsum()
Out[145]:
0    0
1    1
2    0
3    1
4    2
5    3
6    0
7    1
8    2
Name: A, dtype: int64

Expanding data#

Alignment and to-date

Rolling Computation window based on values instead of counts

Rolling Mean by Time Interval
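
For the last two items, note that pandas rolling windows can be defined by a time offset instead of a row count when the index is a DatetimeIndex; a minimal sketch with made-up data:

import numpy as np
import pandas as pd

ts = pd.Series(
    np.random.randn(10),
    index=pd.date_range("2024-01-01", periods=10, freq="D"),
)

# "3D" means each window covers the trailing three days of data,
# however many observations that happens to be
ts.rolling("3D").mean()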

Splitting#

Splitting a frame

Create a list of dataframes, split using a delineation based on logic included in rows.

In [146]:df=pd.DataFrame(   .....:data={   .....:"Case":["A","A","A","B","A","A","B","A","A"],   .....:"Data":np.random.randn(9),   .....:}   .....:)   .....:In [147]:dfs=list(   .....:zip(   .....:*df.groupby(   .....:(1*(df["Case"]=="B"))   .....:.cumsum()   .....:.rolling(window=3,min_periods=1)   .....:.median()   .....:)   .....:)   .....:)[-1]   .....:In [148]:dfs[0]Out[148]:  Case      Data0    A  0.2762321    A -1.0874012    A -0.6736903    B  0.113648In [149]:dfs[1]Out[149]:  Case      Data4    A -1.4784275    A  0.5249886    B  0.404705In [150]:dfs[2]Out[150]:  Case      Data7    A  0.5770468    A -1.715002

Pivot#

The Pivot docs.

Partial sums and subtotals

In [151]:df=pd.DataFrame(   .....:data={   .....:"Province":["ON","QC","BC","AL","AL","MN","ON"],   .....:"City":[   .....:"Toronto",   .....:"Montreal",   .....:"Vancouver",   .....:"Calgary",   .....:"Edmonton",   .....:"Winnipeg",   .....:"Windsor",   .....:],   .....:"Sales":[13,6,16,8,4,3,1],   .....:}   .....:)   .....:In [152]:table=pd.pivot_table(   .....:df,   .....:values=["Sales"],   .....:index=["Province"],   .....:columns=["City"],   .....:aggfunc="sum",   .....:margins=True,   .....:)   .....:In [153]:table.stack("City",future_stack=True)Out[153]:                    SalesProvince CityAL       Calgary      8.0         Edmonton     4.0         Montreal     NaN         Toronto      NaN         Vancouver    NaN...                   ...All      Toronto     13.0         Vancouver   16.0         Windsor      1.0         Winnipeg     3.0         All         51.0[48 rows x 1 columns]

Frequency table like plyr in R

In [154]:grades=[48,99,75,80,42,80,72,68,36,78]In [155]:df=pd.DataFrame(   .....:{   .....:"ID":["x%d"%rforrinrange(10)],   .....:"Gender":["F","M","F","M","F","M","F","M","M","M"],   .....:"ExamYear":[   .....:"2007",   .....:"2007",   .....:"2007",   .....:"2008",   .....:"2008",   .....:"2008",   .....:"2008",   .....:"2009",   .....:"2009",   .....:"2009",   .....:],   .....:"Class":[   .....:"algebra",   .....:"stats",   .....:"bio",   .....:"algebra",   .....:"algebra",   .....:"stats",   .....:"stats",   .....:"algebra",   .....:"bio",   .....:"bio",   .....:],   .....:"Participated":[   .....:"yes",   .....:"yes",   .....:"yes",   .....:"yes",   .....:"no",   .....:"yes",   .....:"yes",   .....:"yes",   .....:"yes",   .....:"yes",   .....:],   .....:"Passed":["yes"ifx>50else"no"forxingrades],   .....:"Employed":[   .....:True,   .....:True,   .....:True,   .....:False,   .....:False,   .....:False,   .....:False,   .....:True,   .....:True,   .....:False,   .....:],   .....:"Grade":grades,   .....:}   .....:)   .....:In [156]:df.groupby("ExamYear").agg(   .....:{   .....:"Participated":lambdax:x.value_counts()["yes"],   .....:"Passed":lambdax:sum(x=="yes"),   .....:"Employed":lambdax:sum(x),   .....:"Grade":lambdax:sum(x)/len(x),   .....:}   .....:)   .....:Out[156]:          Participated  Passed  Employed      GradeExamYear2007                 3       2         3  74.0000002008                 3       3         0  68.5000002009                 3       2         2  60.666667

Plot pandas DataFrame with year over year data

To create year and month cross tabulation:

In [157]:df=pd.DataFrame(   .....:{"value":np.random.randn(36)},   .....:index=pd.date_range("2011-01-01",freq="ME",periods=36),   .....:)   .....:In [158]:pd.pivot_table(   .....:df,index=df.index.month,columns=df.index.year,values="value",aggfunc="sum"   .....:)   .....:Out[158]:        2011      2012      20131  -1.039268 -0.968914  2.5656462  -0.370647 -1.294524  1.4312563  -1.157892  0.413738  1.3403094  -1.344312  0.276662 -1.1702995   0.844885 -0.472035 -0.2261696   1.075770 -0.013960  0.4108357  -0.109050 -0.362543  0.8138508   1.643563 -0.006154  0.1320039  -1.469388 -0.923061 -0.82731710  0.357021  0.895717 -0.07646711 -0.674600  0.805244 -1.18767812 -1.776904 -1.206412  1.130127

Apply#

Rolling apply to organize - Turning embedded lists into a MultiIndex frame

In [159]:df=pd.DataFrame(   .....:data={   .....:"A":[[2,4,8,16],[100,200],[10,20,30]],   .....:"B":[["a","b","c"],["jj","kk"],["ccc"]],   .....:},   .....:index=["I","II","III"],   .....:)   .....:In [160]:defSeriesFromSubList(aList):   .....:returnpd.Series(aList)   .....:In [161]:df_orgz=pd.concat(   .....:{ind:row.apply(SeriesFromSubList)forind,rowindf.iterrows()}   .....:)   .....:In [162]:df_orgzOut[162]:         0     1     2     3I   A    2     4     8  16.0    B    a     b     c   NaNII  A  100   200   NaN   NaN    B   jj    kk   NaN   NaNIII A   10  20.0  30.0   NaN    B  ccc   NaN   NaN   NaN

Rolling apply with a DataFrame returning a Series

Rolling Apply to multiple columns where function calculates a Series before a Scalar from the Series is returned

In [163]:df=pd.DataFrame(   .....:data=np.random.randn(2000,2)/10000,   .....:index=pd.date_range("2001-01-01",periods=2000),   .....:columns=["A","B"],   .....:)   .....:In [164]:dfOut[164]:                   A         B2001-01-01 -0.000144 -0.0001412001-01-02  0.000161  0.0001022001-01-03  0.000057  0.0000882001-01-04 -0.000221  0.0000972001-01-05 -0.000201 -0.000041...              ...       ...2006-06-19  0.000040 -0.0002352006-06-20 -0.000123 -0.0000212006-06-21 -0.000113  0.0001142006-06-22  0.000136  0.0001092006-06-23  0.000027  0.000030[2000 rows x 2 columns]In [165]:defgm(df,const):   .....:v=((((df["A"]+df["B"])+1).cumprod())-1)*const   .....:returnv.iloc[-1]   .....:In [166]:s=pd.Series(   .....:{   .....:df.index[i]:gm(df.iloc[i:min(i+51,len(df)-1)],5)   .....:foriinrange(len(df)-50)   .....:}   .....:)   .....:In [167]:sOut[167]:2001-01-01    0.0009302001-01-02    0.0026152001-01-03    0.0012812001-01-04    0.0011172001-01-05    0.002772                ...2006-04-30    0.0032962006-05-01    0.0026292006-05-02    0.0020812006-05-03    0.0042472006-05-04    0.003928Length: 1950, dtype: float64

Rolling apply with a DataFrame returning a Scalar

Rolling Apply to multiple columns where function returns a Scalar (Volume Weighted Average Price)

In [168]:rng=pd.date_range(start="2014-01-01",periods=100)In [169]:df=pd.DataFrame(   .....:{   .....:"Open":np.random.randn(len(rng)),   .....:"Close":np.random.randn(len(rng)),   .....:"Volume":np.random.randint(100,2000,len(rng)),   .....:},   .....:index=rng,   .....:)   .....:In [170]:dfOut[170]:                Open     Close  Volume2014-01-01 -1.611353 -0.492885    12192014-01-02 -3.000951  0.445794    10542014-01-03 -0.138359 -0.076081    13812014-01-04  0.301568  1.198259    12532014-01-05  0.276381 -0.669831    1728...              ...       ...     ...2014-04-06 -0.040338  0.937843    11882014-04-07  0.359661 -0.285908    18642014-04-08  0.060978  1.714814     9412014-04-09  1.759055 -0.455942    10652014-04-10  0.138185 -1.147008    1453[100 rows x 3 columns]In [171]:defvwap(bars):   .....:return(bars.Close*bars.Volume).sum()/bars.Volume.sum()   .....:In [172]:window=5In [173]:s=pd.concat(   .....:[   .....:(pd.Series(vwap(df.iloc[i:i+window]),index=[df.index[i+window]]))   .....:foriinrange(len(df)-window)   .....:]   .....:)   .....:In [174]:s.round(2)Out[174]:2014-01-06    0.022014-01-07    0.112014-01-08    0.102014-01-09    0.072014-01-10   -0.29              ...2014-04-06   -0.632014-04-07   -0.022014-04-08   -0.032014-04-09    0.342014-04-10    0.29Length: 95, dtype: float64

Timeseries#

Between times

Using indexer between time
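
Both of the items above can be sketched with between_time and indexer_between_time on a datetime-indexed frame (the data below are made up):

import numpy as np
import pandas as pd

rng = pd.date_range("2024-01-01", periods=24, freq="h")
df = pd.DataFrame({"value": np.arange(24)}, index=rng)

# Rows whose time of day falls between 09:00 and 12:00 (inclusive)
morning = df.between_time("09:00", "12:00")

# Or get the positional indexer and reuse it with iloc
idx = df.index.indexer_between_time("09:00", "12:00")
same_morning = df.iloc[idx]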

Constructing a datetime range that excludes weekends and includes only certain times

Vectorized Lookup

Aggregation and plotting time series

Turn a matrix with hours in columns and days in rows into a continuous row sequence in the form of a time series. How to rearrange a Python pandas DataFrame?

Dealing with duplicates when reindexing a timeseries to a specified frequency

Calculate the first day of the month for each entry in a DatetimeIndex

In [175]: dates = pd.date_range("2000-01-01", periods=5)

In [176]: dates.to_period(freq="M").to_timestamp()
Out[176]:
DatetimeIndex(['2000-01-01', '2000-01-01', '2000-01-01', '2000-01-01',
               '2000-01-01'],
              dtype='datetime64[ns]', freq=None)

Resampling#

The Resample docs.

Using Grouper instead of TimeGrouper for time grouping of values

Time grouping with some missing values

Valid frequency arguments to Grouper (see the Timeseries docs)

Grouping using a MultiIndex

Using TimeGrouper and another grouping to create subgroups, then apply a custom function GH 3791

Resampling with custom periods

Resample intraday frame without adding new days

Resample minute data

Resample with groupby
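
A hedged sketch of the groupby-plus-resample idea, using pd.Grouper so that one key is an ordinary column and the other is a time bucket (column names and data are made up):

import numpy as np
import pandas as pd

df = pd.DataFrame(
    {
        "store": ["A", "A", "B", "B"] * 3,
        "date": pd.date_range("2024-01-01", periods=12, freq="W"),
        "sales": np.random.randint(1, 100, 12),
    }
)

# Group by store and by month-end buckets of the date column
df.groupby(["store", pd.Grouper(key="date", freq="ME")])["sales"].sum()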

Merge#

The Join docs.

Concatenate two dataframes with overlapping index (emulate R rbind)

In [177]: rng = pd.date_range("2000-01-01", periods=6)

In [178]: df1 = pd.DataFrame(np.random.randn(6, 3), index=rng, columns=["A", "B", "C"])

In [179]: df2 = df1.copy()

Depending on df construction, ignore_index may be needed

In [180]:df=pd.concat([df1,df2],ignore_index=True)In [181]:dfOut[181]:           A         B         C0  -0.870117 -0.479265 -0.7908551   0.144817  1.726395 -0.4645352  -0.821906  1.597605  0.1873073  -0.128342 -1.511638 -0.2898584   0.399194 -1.430030 -0.6397605   1.115116 -2.012600  1.8106626  -0.870117 -0.479265 -0.7908557   0.144817  1.726395 -0.4645358  -0.821906  1.597605  0.1873079  -0.128342 -1.511638 -0.28985810  0.399194 -1.430030 -0.63976011  1.115116 -2.012600  1.810662

Self Join of a DataFrame GH 2996

In [182]:df=pd.DataFrame(   .....:data={   .....:"Area":["A"]*5+["C"]*2,   .....:"Bins":[110]*2+[160]*3+[40]*2,   .....:"Test_0":[0,1,0,1,2,0,1],   .....:"Data":np.random.randn(7),   .....:}   .....:)   .....:In [183]:dfOut[183]:  Area  Bins  Test_0      Data0    A   110       0 -0.4339371    A   110       1 -0.1605522    A   160       0  0.7444343    A   160       1  1.7542134    A   160       2  0.0008505    C    40       0  0.3422436    C    40       1  1.070599In [184]:df["Test_1"]=df["Test_0"]-1In [185]:pd.merge(   .....:df,   .....:df,   .....:left_on=["Bins","Area","Test_0"],   .....:right_on=["Bins","Area","Test_1"],   .....:suffixes=("_L","_R"),   .....:)   .....:Out[185]:  Area  Bins  Test_0_L    Data_L  Test_1_L  Test_0_R    Data_R  Test_1_R0    A   110         0 -0.433937        -1         1 -0.160552         01    A   160         0  0.744434        -1         1  1.754213         02    A   160         1  1.754213         0         2  0.000850         13    C    40         0  0.342243        -1         1  1.070599         0

How to set the index and join

KDB like asof join
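
pandas ships merge_asof for this kind of join; a minimal sketch with made-up trade and quote frames (both must be sorted on the "on" key):

import pandas as pd

trades = pd.DataFrame(
    {
        "time": pd.to_datetime(["2024-01-01 09:30:01", "2024-01-01 09:30:05"]),
        "ticker": ["X", "X"],
        "price": [100.5, 101.0],
    }
)
quotes = pd.DataFrame(
    {
        "time": pd.to_datetime(["2024-01-01 09:30:00", "2024-01-01 09:30:03"]),
        "ticker": ["X", "X"],
        "bid": [100.4, 100.9],
    }
)

# For each trade, attach the most recent quote at or before the trade time
pd.merge_asof(trades, quotes, on="time", by="ticker")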

Join with a criteria based on the values

Using searchsorted to merge based on values inside a range

Plotting#

The Plotting docs.

Make Matplotlib look like R

Setting x-axis major and minor labels

Plotting multiple charts in an IPython Jupyter notebook

Creating a multi-line plot
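
A minimal sketch of a multi-line plot: DataFrame.plot() draws one line per column on a shared Axes (the data are random and matplotlib is assumed to be installed):

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

df = pd.DataFrame(
    np.random.randn(100, 3).cumsum(axis=0),
    index=pd.date_range("2024-01-01", periods=100),
    columns=["A", "B", "C"],
)

ax = df.plot()  # one line per column, legend added automatically
ax.set_ylabel("value")
plt.show()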

Plotting a heatmap

Annotate a time-series plot

Annotate a time-series plot #2

Generate Embedded plots in excel files using Pandas, Vincent and xlsxwriter

Boxplot for each quartile of a stratifying variable

In [186]:df=pd.DataFrame(   .....:{   .....:"stratifying_var":np.random.uniform(0,100,20),   .....:"price":np.random.normal(100,5,20),   .....:}   .....:)   .....:In [187]:df["quartiles"]=pd.qcut(   .....:df["stratifying_var"],4,labels=["0-25%","25-50%","50-75%","75-100%"]   .....:)   .....:In [188]:df.boxplot(column="price",by="quartiles")Out[188]:<Axes: title={'center': 'price'}, xlabel='quartiles'>
(Figure: boxplot of price for each quartile of the stratifying variable.)

Data in/out#

Performance comparison of SQL vs HDF5

CSV#

The CSV docs

read_csv in action

appending to a csv

Reading a csv chunk-by-chunk
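
A hedged sketch of chunked reading via the chunksize argument (the file name and filter column are hypothetical):

import pandas as pd

pieces = []
# Each iteration yields a DataFrame of up to 100,000 rows
for chunk in pd.read_csv("large_file.csv", chunksize=100_000):
    pieces.append(chunk[chunk["value"] > 0])  # keep only what is needed per chunk

result = pd.concat(pieces, ignore_index=True)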

Reading only certain rows of a csv chunk-by-chunk

Reading the first few lines of a frame

Reading a file that is compressed but not by gzip/bz2 (the native compressed formats which read_csv understands). This example shows a WinZipped file, but is a general application of opening the file within a context manager and using that handle to read. See here

Inferring dtypes from a file

Dealing with bad lines GH 2886

Write a multi-row index CSV without writing duplicates

Reading multiple files to create a single DataFrame#

The best way to combine multiple files into a single DataFrame is to read the individual frames one by one, put all of the individual frames into a list, and then combine the frames in the list using pd.concat():

In [189]: for i in range(3):
   .....:     data = pd.DataFrame(np.random.randn(10, 4))
   .....:     data.to_csv("file_{}.csv".format(i))
   .....:

In [190]: files = ["file_0.csv", "file_1.csv", "file_2.csv"]

In [191]: result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)

You can use the same approach to read all files matching a pattern. Here is an example using glob:

In [192]: import glob

In [193]: import os

In [194]: files = glob.glob("file_*.csv")

In [195]: result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)

Finally, this strategy will work with the other pd.read_*(...) functions described in the io docs.

Parsing date components in multi-columns#

Parsing date components in multi-columns is faster with a format

In [196]:i=pd.date_range("20000101",periods=10000)In [197]:df=pd.DataFrame({"year":i.year,"month":i.month,"day":i.day})In [198]:df.head()Out[198]:   year  month  day0  2000      1    11  2000      1    22  2000      1    33  2000      1    44  2000      1    5In [199]:%timeit pd.to_datetime(df.year * 10000 + df.month * 100 + df.day, format='%Y%m%d')   .....:ds=df.apply(lambdax:"%04d%02d%02d"%(x["year"],x["month"],x["day"]),axis=1)   .....:ds.head()   .....:%timeit pd.to_datetime(ds)   .....:2.7 ms +- 240 us per loop (mean +- std. dev. of 7 runs, 100 loops each)1.09 ms +- 5.62 us per loop (mean +- std. dev. of 7 runs, 1,000 loops each)

Skip row between header and data#

In [200]:data=""";;;;   .....: ;;;;   .....: ;;;;   .....: ;;;;   .....: ;;;;   .....: ;;;;   .....:;;;;   .....: ;;;;   .....: ;;;;   .....:;;;;   .....:date;Param1;Param2;Param4;Param5   .....:    ;m²;°C;m²;m   .....:;;;;   .....:01.01.1990 00:00;1;1;2;3   .....:01.01.1990 01:00;5;3;4;5   .....:01.01.1990 02:00;9;5;6;7   .....:01.01.1990 03:00;13;7;8;9   .....:01.01.1990 04:00;17;9;10;11   .....:01.01.1990 05:00;21;11;12;13   .....:"""   .....:
Option 1: pass rows explicitly to skip rows#
In [201]:fromioimportStringIOIn [202]:pd.read_csv(   .....:StringIO(data),   .....:sep=";",   .....:skiprows=[11,12],   .....:index_col=0,   .....:parse_dates=True,   .....:header=10,   .....:)   .....:Out[202]:                     Param1  Param2  Param4  Param5date1990-01-01 00:00:00       1       1       2       31990-01-01 01:00:00       5       3       4       51990-01-01 02:00:00       9       5       6       71990-01-01 03:00:00      13       7       8       91990-01-01 04:00:00      17       9      10      111990-01-01 05:00:00      21      11      12      13
Option 2: read column names and then data#
In [203]:pd.read_csv(StringIO(data),sep=";",header=10,nrows=10).columnsOut[203]:Index(['date', 'Param1', 'Param2', 'Param4', 'Param5'], dtype='object')In [204]:columns=pd.read_csv(StringIO(data),sep=";",header=10,nrows=10).columnsIn [205]:pd.read_csv(   .....:StringIO(data),sep=";",index_col=0,header=12,parse_dates=True,names=columns   .....:)   .....:Out[205]:                     Param1  Param2  Param4  Param5date1990-01-01 00:00:00       1       1       2       31990-01-01 01:00:00       5       3       4       51990-01-01 02:00:00       9       5       6       71990-01-01 03:00:00      13       7       8       91990-01-01 04:00:00      17       9      10      111990-01-01 05:00:00      21      11      12      13

SQL#

The SQL docs

Reading from databases with SQL
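
A small self-contained sketch using the standard-library sqlite3 driver (the table and column names are made up):

import sqlite3

import pandas as pd

con = sqlite3.connect(":memory:")
pd.DataFrame({"name": ["a", "b"], "qty": [1, 2]}).to_sql("items", con, index=False)

df = pd.read_sql("SELECT name, qty FROM items WHERE qty > 1", con)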

Excel#

The Excel docs

Reading from a filelike handle

Modifying formatting in XlsxWriter output

Loading only visible sheets GH 19842#issuecomment-892150745

HTML#

Reading HTML tables from a server that cannot handle the default request header

HDFStore#

The HDFStores docs

Simple queries with a Timestamp Index
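
A hedged sketch, assuming PyTables is installed: write the frame with append() so it is stored in the queryable table format, then select with a where clause against the timestamp index (the file name is made up):

import numpy as np
import pandas as pd

df = pd.DataFrame(
    np.random.randn(5, 2),
    index=pd.date_range("2024-01-01", periods=5),
    columns=["A", "B"],
)

with pd.HDFStore("ts_query_demo.h5") as store:
    store.append("df", df)  # table format supports where-queries
    subset = store.select("df", where="index > pd.Timestamp('2024-01-03')")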

Managing heterogeneous data using a linked multiple table hierarchy GH 3032

Merging on-disk tables with millions of rows

Avoiding inconsistencies when writing to a store from multiple processes/threads

De-duplicating a large store by chunks, essentially a recursive reduction operation. Shows a function for taking in data from a csv file and creating a store by chunks, with date parsing as well. See here

Creating a store chunk-by-chunk from a csv file

Appending to a store, while creating a unique index

Large Data work flows

Reading in a sequence of files, then providing a global unique index to a store while appending

Groupby on a HDFStore with low group density

Groupby on a HDFStore with high group density

Hierarchical queries on a HDFStore

Counting with a HDFStore

Troubleshoot HDFStore exceptions

Setting min_itemsize with strings

Using ptrepack to create a completely-sorted-index on a store

Storing Attributes to a group node

In [206]: df = pd.DataFrame(np.random.randn(8, 3))

In [207]: store = pd.HDFStore("test.h5")

In [208]: store.put("df", df)

# you can store an arbitrary Python object via pickle
In [209]: store.get_storer("df").attrs.my_attribute = {"A": 10}

In [210]: store.get_storer("df").attrs.my_attribute
Out[210]: {'A': 10}

You can create or load a HDFStore in-memory by passing the driver parameter to PyTables. Changes are only written to disk when the HDFStore is closed.

In [211]: store = pd.HDFStore("test.h5", "w", driver="H5FD_CORE")

In [212]: df = pd.DataFrame(np.random.randn(8, 3))

In [213]: store["test"] = df

# only after closing the store, data is written to disk:
In [214]: store.close()

Binary files#

pandas readily accepts NumPy record arrays, if you need to read in a binary file consisting of an array of C structs. For example, given this C program in a file called main.c compiled with gcc main.c -std=gnu99 on a 64-bit machine,

#include <stdio.h>
#include <stdint.h>

typedef struct _Data
{
    int32_t count;
    double avg;
    float scale;
} Data;

int main(int argc, const char *argv[])
{
    size_t n = 10;
    Data d[n];

    for (int i = 0; i < n; ++i)
    {
        d[i].count = i;
        d[i].avg = i + 1.0;
        d[i].scale = (float) i + 2.0f;
    }

    FILE *file = fopen("binary.dat", "wb");

    fwrite(&d, sizeof(Data), n, file);

    fclose(file);

    return 0;
}

the following Python code will read the binary file 'binary.dat' into a pandas DataFrame, where each element of the struct corresponds to a column in the frame:

names = "count", "avg", "scale"

# note that the offsets are larger than the size of the type because of
# struct padding
offsets = 0, 8, 16
formats = "i4", "f8", "f4"
dt = np.dtype({"names": names, "offsets": offsets, "formats": formats}, align=True)
df = pd.DataFrame(np.fromfile("binary.dat", dt))

Note

The offsets of the structure elements may be different depending on the architecture of the machine on which the file was created. Using a raw binary file format like this for general data storage is not recommended, as it is not cross platform. We recommend either HDF5 or parquet, both of which are supported by pandas' IO facilities.

Computation#

Numerical integration (sample-based) of a time series
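
One way to sketch sample-based integration is the trapezoidal rule over elapsed seconds (the series is made up; np.trapezoid is the NumPy 2.x name, older NumPy exposes the same routine as np.trapz):

import numpy as np
import pandas as pd

ts = pd.Series(
    np.random.rand(10),
    index=pd.date_range("2024-01-01", periods=10, freq="h"),
)

# x-axis expressed as seconds elapsed since the first sample
x = (ts.index - ts.index[0]).total_seconds()
area = np.trapezoid(ts.to_numpy(), x=x)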

Correlation#

Often it's useful to obtain the lower (or upper) triangular form of a correlation matrix calculated from DataFrame.corr(). This can be achieved by passing a boolean mask to where as follows:

In [215]: df = pd.DataFrame(np.random.random(size=(100, 5)))

In [216]: corr_mat = df.corr()

In [217]: mask = np.tril(np.ones_like(corr_mat, dtype=np.bool_), k=-1)

In [218]: corr_mat.where(mask)
Out[218]:
          0         1         2        3   4
0       NaN       NaN       NaN      NaN NaN
1 -0.079861       NaN       NaN      NaN NaN
2 -0.236573  0.183801       NaN      NaN NaN
3 -0.013795 -0.051975  0.037235      NaN NaN
4 -0.031974  0.118342 -0.073499 -0.02063 NaN

The method argument within DataFrame.corr can accept a callable in addition to the named correlation types. Here we compute the distance correlation matrix for a DataFrame object.

In [219]:defdistcorr(x,y):   .....:n=len(x)   .....:a=np.zeros(shape=(n,n))   .....:b=np.zeros(shape=(n,n))   .....:foriinrange(n):   .....:forjinrange(i+1,n):   .....:a[i,j]=abs(x[i]-x[j])   .....:b[i,j]=abs(y[i]-y[j])   .....:a+=a.T   .....:b+=b.T   .....:a_bar=np.vstack([np.nanmean(a,axis=0)]*n)   .....:b_bar=np.vstack([np.nanmean(b,axis=0)]*n)   .....:A=a-a_bar-a_bar.T+np.full(shape=(n,n),fill_value=a_bar.mean())   .....:B=b-b_bar-b_bar.T+np.full(shape=(n,n),fill_value=b_bar.mean())   .....:cov_ab=np.sqrt(np.nansum(A*B))/n   .....:std_a=np.sqrt(np.sqrt(np.nansum(A**2))/n)   .....:std_b=np.sqrt(np.sqrt(np.nansum(B**2))/n)   .....:returncov_ab/std_a/std_b   .....:In [220]:df=pd.DataFrame(np.random.normal(size=(100,3)))In [221]:df.corr(method=distcorr)Out[221]:          0         1         20  1.000000  0.197613  0.2163281  0.197613  1.000000  0.2087492  0.216328  0.208749  1.000000

Timedeltas#

The Timedeltas docs.

Using timedeltas

In [222]:importdatetimeIn [223]:s=pd.Series(pd.date_range("2012-1-1",periods=3,freq="D"))In [224]:s-s.max()Out[224]:0   -2 days1   -1 days2    0 daysdtype: timedelta64[ns]In [225]:s.max()-sOut[225]:0   2 days1   1 days2   0 daysdtype: timedelta64[ns]In [226]:s-datetime.datetime(2011,1,1,3,5)Out[226]:0   364 days 20:55:001   365 days 20:55:002   366 days 20:55:00dtype: timedelta64[ns]In [227]:s+datetime.timedelta(minutes=5)Out[227]:0   2012-01-01 00:05:001   2012-01-02 00:05:002   2012-01-03 00:05:00dtype: datetime64[ns]In [228]:datetime.datetime(2011,1,1,3,5)-sOut[228]:0   -365 days +03:05:001   -366 days +03:05:002   -367 days +03:05:00dtype: timedelta64[ns]In [229]:datetime.timedelta(minutes=5)+sOut[229]:0   2012-01-01 00:05:001   2012-01-02 00:05:002   2012-01-03 00:05:00dtype: datetime64[ns]

Adding and subtracting deltas and dates

In [230]:deltas=pd.Series([datetime.timedelta(days=i)foriinrange(3)])In [231]:df=pd.DataFrame({"A":s,"B":deltas})In [232]:dfOut[232]:           A      B0 2012-01-01 0 days1 2012-01-02 1 days2 2012-01-03 2 daysIn [233]:df["New Dates"]=df["A"]+df["B"]In [234]:df["Delta"]=df["A"]-df["New Dates"]In [235]:dfOut[235]:           A      B  New Dates   Delta0 2012-01-01 0 days 2012-01-01  0 days1 2012-01-02 1 days 2012-01-03 -1 days2 2012-01-03 2 days 2012-01-05 -2 daysIn [236]:df.dtypesOut[236]:A             datetime64[ns]B            timedelta64[ns]New Dates     datetime64[ns]Delta        timedelta64[ns]dtype: object

Another example

Values can be set to NaT using np.nan, similar to datetime

In [237]: y = s - s.shift()

In [238]: y
Out[238]:
0      NaT
1   1 days
2   1 days
dtype: timedelta64[ns]

In [239]: y[1] = np.nan

In [240]: y
Out[240]:
0      NaT
1      NaT
2   1 days
dtype: timedelta64[ns]

Creating example data#

To create a dataframe from every combination of some given values, like R's expand.grid() function, we can create a dict where the keys are column names and the values are lists of the data values:

In [241]:defexpand_grid(data_dict):   .....:rows=itertools.product(*data_dict.values())   .....:returnpd.DataFrame.from_records(rows,columns=data_dict.keys())   .....:In [242]:df=expand_grid(   .....:{"height":[60,70],"weight":[100,140,180],"sex":["Male","Female"]}   .....:)   .....:In [243]:dfOut[243]:    height  weight     sex0       60     100    Male1       60     100  Female2       60     140    Male3       60     140  Female4       60     180    Male5       60     180  Female6       70     100    Male7       70     100  Female8       70     140    Male9       70     140  Female10      70     180    Male11      70     180  Female

Constant series#

To assess if a series has a constant value, we can check if series.nunique() <= 1. However, a more performant approach that does not count all unique values first is:

In [244]: v = s.to_numpy()

In [245]: is_constant = v.shape[0] == 0 or (s[0] == s).all()

This approach assumes that the series does not contain missing values. For the case that we would drop NA values, we can simply remove those values first:

In [246]: v = s.dropna().to_numpy()

In [247]: is_constant = v.shape[0] == 0 or (s[0] == s).all()

If missing values are considered distinct from any other value, then one could use:

In [248]: v = s.to_numpy()

In [249]: is_constant = v.shape[0] == 0 or (s[0] == s).all() or not pd.notna(v).any()

(Note that this example does not disambiguate between np.nan, pd.NA and None.)

