pyarrow.dataset.write_dataset

pyarrow.dataset.write_dataset(data, base_dir, *, basename_template=None, format=None, partitioning=None, partitioning_flavor=None, schema=None, filesystem=None, file_options=None, use_threads=True, preserve_order=False, max_partitions=None, max_open_files=None, max_rows_per_file=None, min_rows_per_group=None, max_rows_per_group=None, file_visitor=None, existing_data_behavior='error', create_dir=True)

Write a dataset to a given format and partitioning.

Parameters:
data : Dataset, Table/RecordBatch, RecordBatchReader, list of Table/RecordBatch, or iterable of RecordBatch

The data to write. This can be a Dataset instance or in-memory Arrow data. If an iterable is given, the schema must also be given.
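
For illustration, a minimal sketch (paths and column names are hypothetical): a Table can be passed directly, while an iterable of RecordBatches requires schema to be given explicitly.

import pyarrow as pa
import pyarrow.dataset as ds

table = pa.table({"year": [2022, 2023], "value": [1.0, 2.0]})

# A Table carries its own schema.
ds.write_dataset(table, "out_table", format="parquet")

# An iterable of RecordBatches does not, so schema= is required.
batches = table.to_batches(max_chunksize=1)
ds.write_dataset(batches, "out_batches", format="parquet",
                 schema=table.schema)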

base_dir : str

The root directory where to write the dataset.

basename_template : str, optional

A template string used to generate basenames of written data files. The token ‘{i}’ will be replaced with an automatically incremented integer. If not specified, it defaults to “part-{i}.” + format.default_extname
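
A quick sketch (the “chunk-{i}.parquet” name is an arbitrary choice):

import pyarrow as pa
import pyarrow.dataset as ds

table = pa.table({"value": [1.0, 2.0]})

# Files are named chunk-0.parquet, chunk-1.parquet, ... as they are created.
ds.write_dataset(table, "out", format="parquet",
                 basename_template="chunk-{i}.parquet")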

format : FileFormat or str

The format in which to write the dataset. Currently supported: “parquet”, “ipc”/“arrow”/“feather”, and “csv”. If a FileSystemDataset is being written and format is not specified, it defaults to the same format as the specified FileSystemDataset. When writing a Table or RecordBatch, this keyword is required.

partitioning : Partitioning or list[str], optional

The partitioning scheme specified with the partitioning() function or a list of field names. When providing a list of field names, you can use partitioning_flavor to drive which partitioning type should be used.

partitioning_flavor : str, optional

One of the partitioning flavors supported by pyarrow.dataset.partitioning. If omitted, the default of partitioning(), which is directory partitioning, is used.
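
A sketch of both spellings, assuming a table with a year column; with the “hive” flavor, files land under paths like year=2022/:

import pyarrow as pa
import pyarrow.dataset as ds

table = pa.table({"year": [2022, 2023], "value": [1.0, 2.0]})

# Field-name list plus a flavor:
ds.write_dataset(table, "out_hive", format="parquet",
                 partitioning=["year"], partitioning_flavor="hive")

# Equivalent explicit scheme built with partitioning():
part = ds.partitioning(pa.schema([("year", pa.int64())]), flavor="hive")
ds.write_dataset(table, "out_hive2", format="parquet", partitioning=part)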

schema : Schema, optional
filesystem : FileSystem, optional
file_options : pyarrow.dataset.FileWriteOptions, optional

FileFormat specific write options, created using the FileFormat.make_write_options() function.
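
For instance, a sketch with Parquet-specific options (zstd compression is an arbitrary choice):

import pyarrow as pa
import pyarrow.dataset as ds

table = pa.table({"value": [1.0, 2.0]})

pq_format = ds.ParquetFileFormat()
opts = pq_format.make_write_options(compression="zstd")
ds.write_dataset(table, "out_zstd", format=pq_format, file_options=opts)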

use_threads : bool, default True

Write files in parallel. If enabled, the maximum parallelism, as determined by the number of available CPU cores, will be used. Using multiple threads may change the order of rows in the written dataset if preserve_order is set to False.

preserve_order : bool, default False

Preserve the order of rows. If enabled, the order of rows in the dataset is guaranteed to be preserved even if use_threads is set to True. This may cause notable performance degradation.
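
A sketch of the combination described above (output path hypothetical):

import pyarrow as pa
import pyarrow.dataset as ds

table = pa.table({"value": [1.0, 2.0]})

# Threaded write whose output still reflects the input row order.
ds.write_dataset(table, "out_ordered", format="parquet",
                 use_threads=True, preserve_order=True)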

max_partitions : int, default 1024

Maximum number of partitions any batch may be written into.

max_open_files : int, default 1024

If greater than 0 then this will limit the maximum number of files that can be left open. If an attempt is made to open too many files then the least recently used file will be closed. If this setting is set too low you may end up fragmenting your data into many small files.

max_rows_per_file : int, default 0

Maximum number of rows per file. If greater than 0 then this will limit how many rows are placed in any single file. Otherwise there will be no limit and one file will be created in each output directory unless files need to be closed to respect max_open_files.
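
A sketch capping file size in rows; max_rows_per_group is set alongside it here to keep the two limits consistent (values arbitrary):

import pyarrow as pa
import pyarrow.dataset as ds

table = pa.table({"value": [1.0, 2.0]})

# Overflow beyond one million rows continues in the next part-{i} file.
ds.write_dataset(table, "out_capped", format="parquet",
                 max_rows_per_file=1_000_000,
                 max_rows_per_group=1_000_000)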

min_rows_per_group : int, default 0

Minimum number of rows per group. When the value is greater than 0, the dataset writer will batch incoming data and only write the row groups to the disk when sufficient rows have accumulated.

max_rows_per_group : int, default 1024 * 1024

Maximum number of rows per group. If the value is greater than 0, then the dataset writer may split up large incoming batches into multiple row groups. If this value is set, then min_rows_per_group should also be set; otherwise it could end up with very small row groups.
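
A sketch setting both bounds, per the note above (values arbitrary):

import pyarrow as pa
import pyarrow.dataset as ds

table = pa.table({"value": [1.0, 2.0]})

# Accumulate at least 64 Ki rows before flushing a row group; never write
# a row group larger than 1 Mi rows.
ds.write_dataset(table, "out_grouped", format="parquet",
                 min_rows_per_group=64 * 1024,
                 max_rows_per_group=1024 * 1024)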

file_visitor : function

If set, this function will be called with a WrittenFile instance for each file created during the call. This object will have both a path attribute and a metadata attribute.

The path attribute will be a string containing the path to the created file.

The metadata attribute will be the parquet metadata of the file. This metadata will have the file path attribute set and can be used to build a _metadata file. The metadata attribute will be None if the format is not parquet.

Example visitor which simply collects the filenames created:

visited_paths = []

def file_visitor(written_file):
    visited_paths.append(written_file.path)
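
A hedged extension of the same idea, also collecting the (parquet-only) metadata, e.g. toward building a _metadata file:

visited_paths = []
metadata_collector = []

def file_visitor(written_file):
    visited_paths.append(written_file.path)
    if written_file.metadata is not None:  # None for non-parquet formats
        metadata_collector.append(written_file.metadata)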
existing_data_behavior : ‘error’ | ‘overwrite_or_ignore’ | ‘delete_matching’

Controls how the dataset will handle data that already exists in the destination. The default behavior (‘error’) is to raise an error if any data exists in the destination.

‘overwrite_or_ignore’ will ignore any existing data and will overwrite files with the same name as an output file. Other existing files will be ignored. This behavior, in combination with a unique basename_template for each write, will allow for an append workflow.

‘delete_matching’ is useful when you are writing a partitioned dataset. The first time each partition directory is encountered the entire directory will be deleted. This allows you to overwrite old partitions completely.
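
A sketch of the append workflow mentioned under ‘overwrite_or_ignore’ (a uuid-based template is one way to keep basenames unique per write):

import uuid
import pyarrow as pa
import pyarrow.dataset as ds

table = pa.table({"value": [1.0, 2.0]})

# Each call uses fresh basenames, so files from earlier writes are kept.
ds.write_dataset(table, "out_append", format="parquet",
                 basename_template=f"part-{uuid.uuid4()}-{{i}}.parquet",
                 existing_data_behavior="overwrite_or_ignore")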

create_dir : bool, default True

If False, directories will not be created. This can be useful for filesystems that do not require directories.