pyarrow.dataset.Fragment

class pyarrow.dataset.Fragment

Bases: _Weakrefable

Fragment of data from a Dataset.

__init__(*args, **kwargs)

Methods

__init__(*args, **kwargs)

count_rows(self, Expression filter=None, ...)

Count rows matching the scanner filter.

head(self, int num_rows[, columns])

Load the first N rows of the fragment.

scanner(self, Schema schema=None[, columns])

Build a scan operation against the fragment.

take(self, indices[, columns])

Select rows of data by index.

to_batches(self, Schema schema=None[, columns])

Read the fragment as materialized record batches.

to_table(self, Schema schema=None[, columns])

Convert this Fragment into a Table.

Attributes

partition_expression

An Expression which evaluates to true for all data viewed by this Fragment.

physical_schema

Return the physical schema of this Fragment.
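Example: Fragment objects are not constructed directly; they are typically obtained from a Dataset via Dataset.get_fragments(). A minimal sketch follows (the path example_ds and the columns x and part are illustrative, not part of this API); the examples further down this page reuse this setup.

>>> import pyarrow as pa
>>> import pyarrow.dataset as ds
>>> table = pa.table({"x": [1, 2, 3, 4, 5, 6],
...                   "part": ["a", "a", "a", "b", "b", "b"]})
>>> ds.write_dataset(table, "example_ds", format="parquet",
...                  partitioning=["part"], partitioning_flavor="hive")
>>> dataset = ds.dataset("example_ds", format="parquet", partitioning="hive")
>>> fragments = list(dataset.get_fragments())  # typically one fragment per data file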

count_rows(self, Expression filter=None, int batch_size=_DEFAULT_BATCH_SIZE, int batch_readahead=_DEFAULT_BATCH_READAHEAD, int fragment_readahead=_DEFAULT_FRAGMENT_READAHEAD, FragmentScanOptions fragment_scan_options=None, bool use_threads=True, bool cache_metadata=True, MemoryPool memory_pool=None)

Count rows matching the scanner filter.

Parameters:
filter : Expression, default None

Scan will return only the rows matching the filter. If possible the predicate will be pushed down to exploit the partition information or internal metadata found in the data source, e.g. Parquet statistics. Otherwise filters the loaded RecordBatches before yielding them.

batch_size : int, default 131_072

The maximum row count for scanned record batches. If scanned record batches are overflowing memory then this value can be lowered to reduce their size.

batch_readahead : int, default 16

The number of batches to read ahead in a file. This might not work for all file formats. Increasing this number will increase RAM usage but could also improve IO utilization.

fragment_readahead : int, default 4

The number of files to read ahead. Increasing this number will increase RAM usage but could also improve IO utilization.

fragment_scan_options : FragmentScanOptions, default None

Options specific to a particular scan and fragment type, which can change between different scans of the same dataset.

use_threads : bool, default True

If enabled, maximum parallelism will be used, as determined by the number of available CPU cores.

cache_metadata : bool, default True

If enabled, metadata may be cached when scanning to speed up repeated scans.

memory_pool : MemoryPool, default None

For memory allocations, if required. If not specified, uses the default pool.

Returns:
count : int
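Example: a minimal sketch of counting rows with and without a pushed-down filter, assuming the hypothetical example_ds dataset written in the first example on this page.

>>> import pyarrow.dataset as ds
>>> dataset = ds.dataset("example_ds", format="parquet", partitioning="hive")
>>> fragment = next(iter(dataset.get_fragments()))
>>> fragment.count_rows()  # all rows in this fragment
>>> fragment.count_rows(filter=ds.field("x") > 1)  # only rows matching the filter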
head(self, int num_rows, columns=None, Expression filter=None, int batch_size=_DEFAULT_BATCH_SIZE, int batch_readahead=_DEFAULT_BATCH_READAHEAD, int fragment_readahead=_DEFAULT_FRAGMENT_READAHEAD, FragmentScanOptions fragment_scan_options=None, bool use_threads=True, bool cache_metadata=True, MemoryPool memory_pool=None)

Load the first N rows of the fragment.

Parameters:
num_rows : int

The number of rows to load.

columns : list of str, default None

The columns to project. This can be a list of column names to include (order and duplicates will be preserved), or a dictionary with {new_column_name: expression} values for more advanced projections.

The list of columns or expressions may use the special fields __batch_index (the index of the batch within the fragment), __fragment_index (the index of the fragment within the dataset), __last_in_fragment (whether the batch is last in fragment), and __filename (the name of the source file or a description of the source fragment).

The columns will be passed down to Datasets and corresponding data fragments to avoid loading, copying, and deserializing columns that will not be required further down the compute chain. By default all of the available columns are projected. Raises an exception if any of the referenced column names does not exist in the dataset’s Schema.

filter : Expression, default None

Scan will return only the rows matching the filter. If possible the predicate will be pushed down to exploit the partition information or internal metadata found in the data source, e.g. Parquet statistics. Otherwise filters the loaded RecordBatches before yielding them.

batch_size : int, default 131_072

The maximum row count for scanned record batches. If scanned record batches are overflowing memory then this value can be lowered to reduce their size.

batch_readahead : int, default 16

The number of batches to read ahead in a file. This might not work for all file formats. Increasing this number will increase RAM usage but could also improve IO utilization.

fragment_readahead : int, default 4

The number of files to read ahead. Increasing this number will increase RAM usage but could also improve IO utilization.

fragment_scan_options : FragmentScanOptions, default None

Options specific to a particular scan and fragment type, which can change between different scans of the same dataset.

use_threads : bool, default True

If enabled, maximum parallelism will be used, as determined by the number of available CPU cores.

cache_metadata : bool, default True

If enabled, metadata may be cached when scanning to speed up repeated scans.

memory_pool : MemoryPool, default None

For memory allocations, if required. If not specified, uses the default pool.

Returns:
Table
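Example: a minimal sketch of loading the first rows of a fragment with a column projection, assuming the hypothetical example_ds dataset from the first example on this page.

>>> import pyarrow.dataset as ds
>>> dataset = ds.dataset("example_ds", format="parquet", partitioning="hive")
>>> fragment = next(iter(dataset.get_fragments()))
>>> fragment.head(2, columns=["x"])  # a Table with at most 2 rows and one column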
partition_expression

An Expression which evaluates to true for all data viewed by this Fragment.

physical_schema

Return the physical schema of this Fragment. This schema can be different from the dataset read schema.
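Example: a minimal sketch inspecting both attributes, assuming the hypothetical hive-partitioned example_ds dataset from the first example on this page.

>>> import pyarrow.dataset as ds
>>> dataset = ds.dataset("example_ds", format="parquet", partitioning="hive")
>>> fragment = next(iter(dataset.get_fragments()))
>>> fragment.partition_expression  # e.g. an expression like (part == "a")
>>> fragment.physical_schema  # omits the partition column, unlike dataset.schema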

scanner(self, Schema schema=None, columns=None, Expression filter=None, int batch_size=_DEFAULT_BATCH_SIZE, int batch_readahead=_DEFAULT_BATCH_READAHEAD, int fragment_readahead=_DEFAULT_FRAGMENT_READAHEAD, FragmentScanOptions fragment_scan_options=None, bool use_threads=True, bool cache_metadata=True, MemoryPool memory_pool=None)

Build a scan operation against the fragment.

Data is not loaded immediately. Instead, this produces a Scanner, which exposes further operations (e.g. loading all data as a table, counting rows).

Parameters:
schema : Schema

Schema to use for scanning. This is used to unify a Fragment to its Dataset’s schema. If not specified, this will use the Fragment’s physical schema, which might differ for each Fragment.

columns : list of str, default None

The columns to project. This can be a list of column names to include (order and duplicates will be preserved), or a dictionary with {new_column_name: expression} values for more advanced projections.

The list of columns or expressions may use the special fields __batch_index (the index of the batch within the fragment), __fragment_index (the index of the fragment within the dataset), __last_in_fragment (whether the batch is last in fragment), and __filename (the name of the source file or a description of the source fragment).

The columns will be passed down to Datasets and corresponding data fragments to avoid loading, copying, and deserializing columns that will not be required further down the compute chain. By default all of the available columns are projected. Raises an exception if any of the referenced column names does not exist in the dataset’s Schema.

filter : Expression, default None

Scan will return only the rows matching the filter. If possible the predicate will be pushed down to exploit the partition information or internal metadata found in the data source, e.g. Parquet statistics. Otherwise filters the loaded RecordBatches before yielding them.

batch_size : int, default 131_072

The maximum row count for scanned record batches. If scanned record batches are overflowing memory then this value can be lowered to reduce their size.

batch_readahead : int, default 16

The number of batches to read ahead in a file. This might not work for all file formats. Increasing this number will increase RAM usage but could also improve IO utilization.

fragment_readahead : int, default 4

The number of files to read ahead. Increasing this number will increase RAM usage but could also improve IO utilization.

fragment_scan_options : FragmentScanOptions, default None

Options specific to a particular scan and fragment type, which can change between different scans of the same dataset.

use_threads : bool, default True

If enabled, maximum parallelism will be used, as determined by the number of available CPU cores.

cache_metadata : bool, default True

If enabled, metadata may be cached when scanning to speed up repeated scans.

memory_pool : MemoryPool, default None

For memory allocations, if required. If not specified, uses the default pool.

Returns:
scanner : Scanner
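Example: a minimal sketch building a Scanner unified to the dataset schema, assuming the hypothetical example_ds dataset from the first example on this page. No data is read until one of the Scanner's consuming methods is called.

>>> import pyarrow.dataset as ds
>>> dataset = ds.dataset("example_ds", format="parquet", partitioning="hive")
>>> fragment = next(iter(dataset.get_fragments()))
>>> scanner = fragment.scanner(schema=dataset.schema)  # lazy: nothing loaded yet
>>> scanner.count_rows()
>>> scanner.head(3)  # a Table with the first rows of the fragment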
take(self, indices, columns=None, Expression filter=None, int batch_size=_DEFAULT_BATCH_SIZE, int batch_readahead=_DEFAULT_BATCH_READAHEAD, int fragment_readahead=_DEFAULT_FRAGMENT_READAHEAD, FragmentScanOptions fragment_scan_options=None, bool use_threads=True, bool cache_metadata=True, MemoryPool memory_pool=None)

Select rows of data by index.

Parameters:
indices : Array or array-like

The indices of rows to select in the dataset.

columns : list of str, default None

The columns to project. This can be a list of column names to include (order and duplicates will be preserved), or a dictionary with {new_column_name: expression} values for more advanced projections.

The list of columns or expressions may use the special fields __batch_index (the index of the batch within the fragment), __fragment_index (the index of the fragment within the dataset), __last_in_fragment (whether the batch is last in fragment), and __filename (the name of the source file or a description of the source fragment).

The columns will be passed down to Datasets and corresponding data fragments to avoid loading, copying, and deserializing columns that will not be required further down the compute chain. By default all of the available columns are projected. Raises an exception if any of the referenced column names does not exist in the dataset’s Schema.

filter : Expression, default None

Scan will return only the rows matching the filter. If possible the predicate will be pushed down to exploit the partition information or internal metadata found in the data source, e.g. Parquet statistics. Otherwise filters the loaded RecordBatches before yielding them.

batch_size : int, default 131_072

The maximum row count for scanned record batches. If scanned record batches are overflowing memory then this value can be lowered to reduce their size.

batch_readahead : int, default 16

The number of batches to read ahead in a file. This might not work for all file formats. Increasing this number will increase RAM usage but could also improve IO utilization.

fragment_readahead : int, default 4

The number of files to read ahead. Increasing this number will increase RAM usage but could also improve IO utilization.

fragment_scan_options : FragmentScanOptions, default None

Options specific to a particular scan and fragment type, which can change between different scans of the same dataset.

use_threads : bool, default True

If enabled, maximum parallelism will be used, as determined by the number of available CPU cores.

cache_metadata : bool, default True

If enabled, metadata may be cached when scanning to speed up repeated scans.

memory_pool : MemoryPool, default None

For memory allocations, if required. If not specified, uses the default pool.

Returns:
Table
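Example: a minimal sketch selecting rows by position, assuming the hypothetical example_ds dataset from the first example on this page; a plain Python list is used here as the array-like of indices.

>>> import pyarrow.dataset as ds
>>> dataset = ds.dataset("example_ds", format="parquet", partitioning="hive")
>>> fragment = next(iter(dataset.get_fragments()))
>>> fragment.take([0, 2])  # rows 0 and 2 of this fragment, returned as a Table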
to_batches(self, Schema schema=None, columns=None, Expression filter=None, int batch_size=_DEFAULT_BATCH_SIZE, int batch_readahead=_DEFAULT_BATCH_READAHEAD, int fragment_readahead=_DEFAULT_FRAGMENT_READAHEAD, FragmentScanOptions fragment_scan_options=None, bool use_threads=True, bool cache_metadata=True, MemoryPool memory_pool=None)

Read the fragment as materialized record batches.

Parameters:
schema : Schema, optional

Concrete schema to use for scanning.

columns : list of str, default None

The columns to project. This can be a list of column names to include (order and duplicates will be preserved), or a dictionary with {new_column_name: expression} values for more advanced projections.

The list of columns or expressions may use the special fields __batch_index (the index of the batch within the fragment), __fragment_index (the index of the fragment within the dataset), __last_in_fragment (whether the batch is last in fragment), and __filename (the name of the source file or a description of the source fragment).

The columns will be passed down to Datasets and corresponding data fragments to avoid loading, copying, and deserializing columns that will not be required further down the compute chain. By default all of the available columns are projected. Raises an exception if any of the referenced column names does not exist in the dataset’s Schema.

filter : Expression, default None

Scan will return only the rows matching the filter. If possible the predicate will be pushed down to exploit the partition information or internal metadata found in the data source, e.g. Parquet statistics. Otherwise filters the loaded RecordBatches before yielding them.

batch_size : int, default 131_072

The maximum row count for scanned record batches. If scanned record batches are overflowing memory then this value can be lowered to reduce their size.

batch_readahead : int, default 16

The number of batches to read ahead in a file. This might not work for all file formats. Increasing this number will increase RAM usage but could also improve IO utilization.

fragment_readahead : int, default 4

The number of files to read ahead. Increasing this number will increase RAM usage but could also improve IO utilization.

fragment_scan_options : FragmentScanOptions, default None

Options specific to a particular scan and fragment type, which can change between different scans of the same dataset.

use_threads : bool, default True

If enabled, maximum parallelism will be used, as determined by the number of available CPU cores.

cache_metadata : bool, default True

If enabled, metadata may be cached when scanning to speed up repeated scans.

memory_pool : MemoryPool, default None

For memory allocations, if required. If not specified, uses the default pool.

Returns:
record_batches : iterator of RecordBatch
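Example: a minimal sketch streaming a fragment batch by batch, assuming the hypothetical example_ds dataset from the first example on this page. A small batch_size is used only to force several batches.

>>> import pyarrow.dataset as ds
>>> dataset = ds.dataset("example_ds", format="parquet", partitioning="hive")
>>> fragment = next(iter(dataset.get_fragments()))
>>> for batch in fragment.to_batches(columns=["x"], batch_size=2):
...     print(batch.num_rows)  # each batch holds at most batch_size rows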
to_table(self, Schema schema=None, columns=None, Expression filter=None, int batch_size=_DEFAULT_BATCH_SIZE, int batch_readahead=_DEFAULT_BATCH_READAHEAD, int fragment_readahead=_DEFAULT_FRAGMENT_READAHEAD, FragmentScanOptions fragment_scan_options=None, bool use_threads=True, bool cache_metadata=True, MemoryPool memory_pool=None)

Convert this Fragment into a Table.

Use this convenience utility with care. This will serially materialize the Scan result in memory before creating the Table.

Parameters:
schema : Schema, optional

Concrete schema to use for scanning.

columns : list of str, default None

The columns to project. This can be a list of column names to include (order and duplicates will be preserved), or a dictionary with {new_column_name: expression} values for more advanced projections.

The list of columns or expressions may use the special fields __batch_index (the index of the batch within the fragment), __fragment_index (the index of the fragment within the dataset), __last_in_fragment (whether the batch is last in fragment), and __filename (the name of the source file or a description of the source fragment).

The columns will be passed down to Datasets and corresponding data fragments to avoid loading, copying, and deserializing columns that will not be required further down the compute chain. By default all of the available columns are projected. Raises an exception if any of the referenced column names does not exist in the dataset’s Schema.

filter : Expression, default None

Scan will return only the rows matching the filter. If possible the predicate will be pushed down to exploit the partition information or internal metadata found in the data source, e.g. Parquet statistics. Otherwise filters the loaded RecordBatches before yielding them.

batch_size : int, default 131_072

The maximum row count for scanned record batches. If scanned record batches are overflowing memory then this value can be lowered to reduce their size.

batch_readahead : int, default 16

The number of batches to read ahead in a file. This might not work for all file formats. Increasing this number will increase RAM usage but could also improve IO utilization.

fragment_readahead : int, default 4

The number of files to read ahead. Increasing this number will increase RAM usage but could also improve IO utilization.

fragment_scan_options : FragmentScanOptions, default None

Options specific to a particular scan and fragment type, which can change between different scans of the same dataset.

use_threads : bool, default True

If enabled, maximum parallelism will be used, as determined by the number of available CPU cores.

cache_metadata : bool, default True

If enabled, metadata may be cached when scanning to speed up repeated scans.

memory_pool : MemoryPool, default None

For memory allocations, if required. If not specified, uses the default pool.

Returns:
table : Table
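Example: a minimal sketch materializing a fragment as a Table, assuming the hypothetical example_ds dataset from the first example on this page. Passing the dataset schema unifies the fragment with it, so the hive partition column appears in the result.

>>> import pyarrow.dataset as ds
>>> dataset = ds.dataset("example_ds", format="parquet", partitioning="hive")
>>> fragment = next(iter(dataset.get_fragments()))
>>> fragment.to_table(schema=dataset.schema)  # includes the partition column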