Filesystem Interface#
PyArrow comes with an abstract filesystem interface, as well as concrete implementations for various storage types.
The filesystem interface provides input and output streams as well as directory operations. A simplified view of the underlying data storage is exposed. Data paths are represented as abstract paths, which are /-separated, even on Windows, and shouldn't include special path components such as "." and "..". Symbolic links, if supported by the underlying storage, are automatically dereferenced. Only basic metadata about file entries, such as the file size and modification time, is made available.
The core interface is represented by the base class FileSystem.
PyArrow natively implements the following filesystem subclasses, covered in the sections below: Local FS, S3, Google Cloud Storage, Hadoop Distributed File System (HDFS), and Azure Storage.
It is also possible to use your own fsspec-compliant filesystem with PyArrow functionality, as described in the section Using fsspec-compatible filesystems with Arrow.
Usage#
Instantiating a filesystem#
A FileSystem object can be created with one of the constructors (check the respective constructor for its options):
>>> from pyarrow import fs
>>> local = fs.LocalFileSystem()
or alternatively inferred from a URI:
>>> s3, path = fs.FileSystem.from_uri("s3://my-bucket")
>>> s3
<pyarrow._s3fs.S3FileSystem at 0x7f6760cbf4f0>
>>> path
'my-bucket'
Reading and writing files#
Several of the IO-related functions in PyArrow accept either a URI (and infer the filesystem) or an explicit filesystem argument to specify the filesystem to read or write from. For example, the pyarrow.parquet.read_table() function can be used in the following ways:
import pyarrow.parquet as pq
from pyarrow import fs

# using a URI -> filesystem is inferred
pq.read_table("s3://my-bucket/data.parquet")

# using a path and filesystem
s3 = fs.S3FileSystem(..)
pq.read_table("my-bucket/data.parquet", filesystem=s3)
The filesystem interface further allows opening files for reading (input) or writing (output) directly, which can be combined with functions that work with file-like objects. For example:
import pyarrow as pa
from pyarrow import fs

# `table` is assumed to be an existing pyarrow.Table
local = fs.LocalFileSystem()
with local.open_output_stream("test.arrow") as file:
    with pa.RecordBatchFileWriter(file, table.schema) as writer:
        writer.write_table(table)
Listing files#
Inspecting the directories and files on a filesystem can be done with the FileSystem.get_file_info() method. To list the contents of a directory, use the FileSelector object to specify the selection:
>>> local.get_file_info(fs.FileSelector("dataset/", recursive=True))
[<FileInfo for 'dataset/part=B': type=FileType.Directory>,
 <FileInfo for 'dataset/part=B/data0.parquet': type=FileType.File, size=1564>,
 <FileInfo for 'dataset/part=A': type=FileType.Directory>,
 <FileInfo for 'dataset/part=A/data0.parquet': type=FileType.File, size=1564>]
This returns a list of FileInfo objects, containing information about the type (file or directory), the size, the date last modified, etc.
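For illustration, a short sketch of accessing the individual attributes of the returned FileInfo entries (assuming the dataset/ directory from the listing above exists on the local filesystem):

from pyarrow import fs

local = fs.LocalFileSystem()
for info in local.get_file_info(fs.FileSelector("dataset/", recursive=True)):
    # each FileInfo exposes (among others) path, type, size and mtime attributes
    if info.type == fs.FileType.File:
        print(info.path, info.size, info.mtime)
    else:
        print(info.path, "(directory)")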
You can also get this information for a single explicit path (or list of paths):
>>> local.get_file_info('test.arrow')
<FileInfo for 'test.arrow': type=FileType.File, size=3250>
>>> local.get_file_info('non_existent')
<FileInfo for 'non_existent': type=FileType.NotFound>
Local FS#
The LocalFileSystem allows you to access files on the local machine.
Example of how to write a file to disk and read it back:
>>> from pyarrow import fs
>>> local = fs.LocalFileSystem()
>>> with local.open_output_stream('/tmp/pyarrowtest.dat') as stream:
...     stream.write(b'data')
4
>>> with local.open_input_stream('/tmp/pyarrowtest.dat') as stream:
...     print(stream.readall())
b'data'
S3#
PyArrow natively implements an S3 filesystem for S3-compatible storage.
The S3FileSystem constructor has several options to configure the S3 connection (e.g. credentials, the region, an endpoint override, etc.). In addition, the constructor will also inspect configured S3 credentials as supported by AWS (such as the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, AWS configuration files, and the EC2 Instance Metadata Service for EC2 nodes).
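For illustration, a sketch of passing some of these options explicitly; the credential values and the endpoint below are placeholders, not working values:

from pyarrow import fs

# explicit credentials, region and endpoint (e.g. for an S3-compatible
# service); all values here are placeholders
s3 = fs.S3FileSystem(
    access_key="my-access-key",
    secret_key="my-secret-key",
    region="eu-west-3",
    endpoint_override="https://my-s3-endpoint:9000",
)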
Example of how you can read contents from an S3 bucket:
>>> from pyarrow import fs
>>> s3 = fs.S3FileSystem(region='eu-west-3')

# List all contents in a bucket, recursively
>>> s3.get_file_info(fs.FileSelector('my-test-bucket', recursive=True))
[<FileInfo for 'my-test-bucket/File1': type=FileType.File, size=10>,
 <FileInfo for 'my-test-bucket/File5': type=FileType.File, size=10>,
 <FileInfo for 'my-test-bucket/Dir1': type=FileType.Directory>,
 <FileInfo for 'my-test-bucket/Dir2': type=FileType.Directory>,
 <FileInfo for 'my-test-bucket/EmptyDir': type=FileType.Directory>,
 <FileInfo for 'my-test-bucket/Dir1/File2': type=FileType.File, size=11>,
 <FileInfo for 'my-test-bucket/Dir1/Subdir': type=FileType.Directory>,
 <FileInfo for 'my-test-bucket/Dir2/Subdir': type=FileType.Directory>,
 <FileInfo for 'my-test-bucket/Dir2/Subdir/File3': type=FileType.File, size=10>]

# Open a file for reading and download its contents
>>> f = s3.open_input_stream('my-test-bucket/Dir1/File2')
>>> f.readall()
b'some data'
Note that it is important to configure S3FileSystem with the correct region for the bucket being used. If region is not set, the AWS SDK will choose a value, defaulting to 'us-east-1' if the SDK version is <1.8. Otherwise, it will try to use a variety of heuristics (environment variables, configuration profile, EC2 metadata server) to resolve the region.
It is also possible to resolve the region from the bucket name for S3FileSystem by using pyarrow.fs.resolve_s3_region() or pyarrow.fs.S3FileSystem.from_uri().
Here are a couple of examples in code:
>>> from pyarrow import fs
>>> s3 = fs.S3FileSystem(region=fs.resolve_s3_region('my-test-bucket'))

# Or via URI:
>>> s3, path = fs.S3FileSystem.from_uri('s3://[access_key:secret_key@]bucket/path')
See also
See the AWS docs for the different ways to configure the AWS credentials.
pyarrow.fs.resolve_s3_region() for resolving the region from a bucket name.
Troubleshooting#
When using S3FileSystem, output is only produced for fatal errors or when printing return values. For troubleshooting, the log level can be set using the environment variable ARROW_S3_LOG_LEVEL. The log level must be set prior to running any code that interacts with S3. Possible values include FATAL (the default), ERROR, WARN, INFO, DEBUG (recommended), TRACE, and OFF.
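For example, one way to do this from Python is a sketch like the following, assuming the environment variable is picked up when the S3 subsystem is first used (setting it in the shell before starting Python achieves the same):

import os

# must be set before any code interacting with S3 runs
os.environ["ARROW_S3_LOG_LEVEL"] = "DEBUG"

from pyarrow import fs
s3 = fs.S3FileSystem(region='eu-west-3')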
Google Cloud Storage File System#
PyArrow natively implements a filesystem backed by Google Cloud Storage (GCS).
If not running on Google Cloud Platform (GCP), this generally requires the environment variable GOOGLE_APPLICATION_CREDENTIALS to point to a JSON file containing credentials. Alternatively, use the gcloud CLI to generate a credentials file in the default location:
gcloud auth application-default login
To connect to a public bucket without using any credentials, you must pass anonymous=True to GcsFileSystem. Otherwise, the filesystem will report "Couldn't resolve host name", since there are different hostnames for authenticated and public access.
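Conversely, a minimal sketch of connecting with credentials resolved through the default mechanisms (assuming GOOGLE_APPLICATION_CREDENTIALS or gcloud application-default credentials are configured, and that 'my-gcs-bucket' is a placeholder for a bucket you can access):

from pyarrow import fs

# credentials are resolved via the standard GCS mechanisms
gcs = fs.GcsFileSystem()
print(gcs.get_file_info(fs.FileSelector('my-gcs-bucket', recursive=False)))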
Example showing how you can read contents from a GCS bucket:
>>> from datetime import timedelta
>>> from pyarrow import fs
>>> gcs = fs.GcsFileSystem(anonymous=True, retry_time_limit=timedelta(seconds=15))

# List all contents in a bucket, recursively
>>> uri = "gcp-public-data-landsat/LC08/01/001/003/"
>>> file_list = gcs.get_file_info(fs.FileSelector(uri, recursive=True))

# Open a file for reading and download its contents
>>> f = gcs.open_input_stream(file_list[0].path)
>>> f.read(64)
b'GROUP = FILE_HEADER\n LANDSAT_SCENE_ID = "LC80010032013082LGN03"\n S'
See also
The GcsFileSystem constructor by default uses the process described in the GCS docs to resolve credentials.
Hadoop Distributed File System (HDFS)#
PyArrow comes with bindings to the Hadoop File System (based on C++ bindings using libhdfs, a JNI-based interface to the Java Hadoop client). You connect using the HadoopFileSystem constructor:
from pyarrow import fs
hdfs = fs.HadoopFileSystem(host, port, user=user, kerb_ticket=ticket_cache_path)
The libhdfs library is loaded at runtime (rather than at link / library load time, since the library may not be in your LD_LIBRARY_PATH), and relies on some environment variables.
HADOOP_HOME: the root of your installed Hadoop distribution. Often has lib/native/libhdfs.so.

JAVA_HOME: the location of your Java SDK installation.

ARROW_LIBHDFS_DIR (optional): explicit location of libhdfs.so if it is installed somewhere other than $HADOOP_HOME/lib/native.

CLASSPATH: must contain the Hadoop jars. You can set these using:

export CLASSPATH=`$HADOOP_HOME/bin/hadoop classpath --glob`

# or on Windows
%HADOOP_HOME%/bin/hadoop classpath --glob > %CLASSPATH%
In contrast to the legacy HDFS filesystem with pa.hdfs.connect, setting CLASSPATH is not optional (pyarrow will not attempt to infer it).
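Once connected, the resulting HadoopFileSystem supports the same generic FileSystem operations shown earlier; a sketch, with the host, user, and paths being placeholders:

from pyarrow import fs

hdfs = fs.HadoopFileSystem("namenode-host", 8020, user="hadoop-user")

# list a directory and read the beginning of a file via the generic API
print(hdfs.get_file_info(fs.FileSelector("/data", recursive=False)))
with hdfs.open_input_stream("/data/example.parquet") as f:
    header = f.read(4)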
Azure Storage File System#
PyArrow natively implements an Azure filesystem for Azure Blob Storage, with or without hierarchical namespace enabled.
The AzureFileSystem constructor has several options to configure the Azure Blob Storage connection (e.g. account name, account key, SAS token, etc.).
If neither account_key nor sas_token is specified, a DefaultAzureCredential is used for authentication. This means it will try several types of authentication and go with the first one that works. If any authentication parameters are provided when initialising the FileSystem, they will be used instead of the default credential.
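For example, a sketch of constructing the filesystem with an explicit account key instead of relying on the default credential (the account name and key are placeholders):

from pyarrow import fs

# explicit account key; omit account_key (and sas_token) to fall back to
# DefaultAzureCredential-based authentication
azure_fs = fs.AzureFileSystem(
    account_name='myaccount',
    account_key='my-account-key',
)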
Example showing how you can read contents from an Azure Blob Storage account:
>>> from pyarrow import fs
>>> azure_fs = fs.AzureFileSystem(account_name='myaccount')

# List all contents in a container, recursively
>>> azure_fs.get_file_info(fs.FileSelector('my-container', recursive=True))
[<FileInfo for 'my-container/File1': type=FileType.File, size=10>,
 <FileInfo for 'my-container/File2': type=FileType.File, size=20>,
 <FileInfo for 'my-container/Dir1': type=FileType.Directory>,
 <FileInfo for 'my-container/Dir1/File3': type=FileType.File, size=30>]

# Open a file for reading and download its contents
>>> f = azure_fs.open_input_stream('my-container/File1')
>>> f.readall()
b'some data'
For more details on the parameters and usage, refer to the AzureFileSystem class documentation.
See also
See the Azure SDK for C++ documentation for more information on authentication and configuration options.
Using fsspec-compatible filesystems with Arrow#
The filesystems mentioned above are natively supported by Arrow C++ / PyArrow. The Python ecosystem, however, also has several filesystem packages. Those packages following the fsspec interface can be used in PyArrow as well.
Functions accepting a filesystem object will also accept an fsspec subclass. For example:
# creating an fsspec-based filesystem object for Google Cloud Storage
import gcsfs
fs = gcsfs.GCSFileSystem(project='my-google-project')

# using this to read a partitioned dataset
import pyarrow.dataset as ds
ds.dataset("data/", filesystem=fs)
Similarly for Azure Blob Storage:
import adlfs
# ... load your credentials and configure the filesystem
fs = adlfs.AzureBlobFileSystem(account_name=account_name, account_key=account_key)

import pyarrow.dataset as ds
ds.dataset("mycontainer/data/", filesystem=fs)
Under the hood, the fsspec filesystem object is wrapped into a Python-based PyArrow filesystem (PyFileSystem) using FSSpecHandler. You can also manually do this to get an object with the PyArrow FileSystem interface:
from pyarrow.fs import PyFileSystem, FSSpecHandler
pa_fs = PyFileSystem(FSSpecHandler(fs))
Then all the functionality of FileSystem is accessible:
# write data
with pa_fs.open_output_stream('mycontainer/pyarrowtest.dat') as stream:
    stream.write(b'data')

# read data
with pa_fs.open_input_stream('mycontainer/pyarrowtest.dat') as stream:
    print(stream.readall())
# b'data'

# read a partitioned dataset
ds.dataset("data/", filesystem=pa_fs)
Using fsspec-compatible filesystem URIs#
PyArrow can automatically instantiate fsspec filesystems by prefixing the URI scheme with fsspec+. This allows you to use fsspec-compatible filesystems directly with PyArrow's IO functions without needing to manually create a filesystem object. Example of writing and reading a Parquet file using an in-memory filesystem provided by fsspec:
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({'a': [1, 2, 3]})
pq.write_table(table, "fsspec+memory://path/to/my_table.parquet")
pq.read_table("fsspec+memory://path/to/my_table.parquet")
Example of reading a Parquet file directly from GitHub:
pq.read_table("fsspec+github://apache:arrow-testing@/data/parquet/alltypes-java.parquet")
Hugging Face URIs are explicitly allowed as a shortcut without needing to prefix with fsspec+. This is useful for reading datasets hosted on Hugging Face:
pq.read_table("hf://datasets/stanfordnlp/imdb/plain_text/train-00000-of-00001.parquet")
Using Arrow filesystems with fsspec#
The Arrow FileSystem interface has a limited, developer-oriented API surface. This is sufficient for basic interactions and for using this with Arrow's IO functionality. On the other hand, the fsspec interface provides a very large API with many helper methods. If you want to use those, or if you need to interact with a package that expects fsspec-compatible filesystem objects, you can wrap an Arrow FileSystem object with fsspec.
Starting with fsspec version 2021.09, the ArrowFSWrapper can be used for this:
>>> from pyarrow import fs
>>> local = fs.LocalFileSystem()
>>> from fsspec.implementations.arrow import ArrowFSWrapper
>>> local_fsspec = ArrowFSWrapper(local)
The resulting object now has an fsspec-compatible interface, while being backed by the Arrow FileSystem under the hood. Example usage to create a directory and file, and list the contents:
>>> local_fsspec.mkdir("./test")
>>> local_fsspec.touch("./test/file.txt")
>>> local_fsspec.ls("./test/")
['./test/file.txt']
For more information, see the fsspec documentation.

