Getting Started
Arrow manages data in arrays (pyarrow.Array), which can be grouped into tables (pyarrow.Table) to represent the columns of tabular data.
Arrow also provides support for various formats to get that tabular data in and out of disk and over networks. The most commonly used formats are Parquet (Reading and Writing the Apache Parquet Format) and the IPC format (Streaming, Serialization, and IPC).
Creating Arrays and Tables
Arrays in Arrow are collections of data of uniform type. That allows Arrow to use the best performing implementation to store the data and perform computations on it. So each array is meant to have data and a type:
In [1]: import pyarrow as pa

In [2]: days = pa.array([1, 12, 17, 23, 28], type=pa.int8())
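Arrays can also contain missing values. As a small illustrative sketch (the variable name here is made up), passing None produces a null entry that the array tracks separately from the data:

# None becomes a null entry; null_count reports how many there are
some_days = pa.array([1, None, 17], type=pa.int8())
print(some_days.null_count)  # 1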
Multiple arrays can be combined into a table to form the columns of tabular data, when each is attached to a column name:
In [3]: months = pa.array([1, 3, 5, 7, 1], type=pa.int8())

In [4]: years = pa.array([1990, 2000, 1995, 2000, 1995], type=pa.int16())

In [5]: birthdays_table = pa.table([days, months, years],
   ...:                            names=["days", "months", "years"])
   ...:

In [6]: birthdays_table
Out[6]:
pyarrow.Table
days: int8
months: int8
years: int16
----
days: [[1,12,17,23,28]]
months: [[1,3,5,7,1]]
years: [[1990,2000,1995,2000,1995]]
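Once a table is built, its schema and columns can be inspected directly. A minimal sketch using the table created above:

# The schema lists the column names and types
print(birthdays_table.schema)

# Columns are accessed by name and returned as chunked arrays
print(birthdays_table["years"])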
See Data Types and In-Memory Data Model for more details.
Saving and Loading Tables
Once you have tabular data, Arrow provides out-of-the-box features to save and restore that data in common formats like Parquet:
In [7]: import pyarrow.parquet as pq

In [8]: pq.write_table(birthdays_table, 'birthdays.parquet')
Once you have your data on disk, loading it back is a single function call, and Arrow is heavily optimized for memory and speed, so loading data will be as quick as possible:
In [9]: reloaded_birthdays = pq.read_table('birthdays.parquet')

In [10]: reloaded_birthdays
Out[10]:
pyarrow.Table
days: int8
months: int8
years: int16
----
days: [[1,12,17,23,28]]
months: [[1,3,5,7,1]]
years: [[1990,2000,1995,2000,1995]]
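pq.read_table can also load just a subset of the columns via its columns parameter, which avoids reading the rest of the file. A minimal sketch (the variable name is illustrative):

# Read back only the "years" column from the Parquet file
years_only = pq.read_table('birthdays.parquet', columns=["years"])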
Saving and loading back data in Arrow is usually done through the Parquet, IPC (Feather File Format), CSV, or line-delimited JSON formats.
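As a quick sketch of two of those other formats, the same table can be round-tripped with the pyarrow.feather and pyarrow.csv modules (the file names here are just examples):

import pyarrow.feather as feather
import pyarrow.csv as csv

# Feather (Arrow IPC file format) round trip
feather.write_feather(birthdays_table, 'birthdays.feather')
feather_table = feather.read_table('birthdays.feather')

# CSV round trip
csv.write_csv(birthdays_table, 'birthdays.csv')
csv_table = csv.read_csv('birthdays.csv')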
Performing Computations
Arrow ships with a bunch of compute functions that can be applied to its arrays and tables, so through the compute functions it's possible to apply transformations to the data:
In [11]: import pyarrow.compute as pc

In [12]: pc.value_counts(birthdays_table["years"])
Out[12]:
<pyarrow.lib.StructArray object at 0x7fe0006005e0>
-- is_valid: all not null
-- child 0 type: int16
  [
    1990,
    2000,
    1995
  ]
-- child 1 type: int64
  [
    1,
    2,
    2
  ]
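Beyond value_counts, the compute module also includes aggregations; as a minimal sketch, two that apply naturally to the years column:

# Smallest and largest year in a single pass (a struct scalar with "min"/"max")
pc.min_max(birthdays_table["years"])

# Arithmetic mean of the years
pc.mean(birthdays_table["years"])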
See Compute Functions for a list of available compute functions and how to use them.
Working with large data
Arrow also provides the pyarrow.dataset API to work with large data, which will handle partitioning your data into smaller chunks for you:
In [13]: import pyarrow.dataset as ds

In [14]: ds.write_dataset(birthdays_table, "savedir", format="parquet",
   ....:                  partitioning=ds.partitioning(
   ....:                      pa.schema([birthdays_table.schema.field("years")])
   ....:                  ))
   ....:
Loading back the partitioned dataset will detect the chunks:
In [15]: birthdays_dataset = ds.dataset("savedir", format="parquet", partitioning=["years"])

In [16]: birthdays_dataset.files
Out[16]:
['savedir/1990/part-0.parquet',
 'savedir/1995/part-0.parquet',
 'savedir/2000/part-0.parquet']
and will lazily load chunks of data only when iterating over them:
In [17]: import datetime

In [18]: current_year = datetime.datetime.now(datetime.UTC).year

In [19]: for table_chunk in birthdays_dataset.to_batches():
   ....:     print("AGES", pc.subtract(current_year, table_chunk["years"]))
   ....:
AGES [
  35
]
AGES [
  30,
  30
]
AGES [
  25,
  25
]
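Datasets can also be filtered while scanning, so only the matching rows are materialized. A minimal sketch, assuming the partitioned dataset from above (the variable name is illustrative):

# ds.field builds an expression evaluated during the scan;
# here only the year-2000 partition is read into a table
born_in_2000 = birthdays_dataset.to_table(filter=ds.field("years") == 2000)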
For further details on how to work with big datasets, how to filter them, how to project them, etc., refer to the Tabular Datasets documentation.
Continuing from here
For digging further into Arrow, you might want to read the PyArrow Documentation itself or the Arrow Python Cookbook.

