pyarrow.ChunkedArray#
- class pyarrow.ChunkedArray#
Bases: _PandasConvertible
An array-like composed from a (possibly empty) collection of pyarrow.Array objects.
Warning
Do not call this class’s constructor directly.
Examples
To construct a ChunkedArray object, use pyarrow.chunked_array():

>>> import pyarrow as pa
>>> pa.chunked_array([], type=pa.int8())
<pyarrow.lib.ChunkedArray object at ...>
[
...
]

>>> pa.chunked_array([[2, 2, 4], [4, 5, 100]])
<pyarrow.lib.ChunkedArray object at ...>
[
  [
    2,
    2,
    4
  ],
  [
    4,
    5,
    100
  ]
]
>>> isinstance(pa.chunked_array([[2, 2, 4], [4, 5, 100]]), pa.ChunkedArray)
True
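A ChunkedArray can equally be built from existing pyarrow.Array chunks; a brief supplementary sketch (not from the original reference):

>>> import pyarrow as pa
>>> chunks = [pa.array([2, 2, 4]), pa.array([4, 5, 100])]
>>> pa.chunked_array(chunks).num_chunks
2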
- __init__(*args, **kwargs)#
Methods
__init__(*args, **kwargs)
cast(self[, target_type, safe, options])  Cast array values to another data type.
chunk(self, i)  Select a chunk by its index.
combine_chunks(self, MemoryPool memory_pool=None)  Flatten this ChunkedArray into a single non-chunked array.
dictionary_encode(self[, null_encoding])  Compute dictionary-encoded representation of array.
drop_null(self)  Remove missing values from a chunked array.
equals(self, ChunkedArray other)  Return whether the contents of two chunked arrays are equal.
fill_null(self, fill_value)  Replace each null element in values with fill_value.
filter(self, mask[, null_selection_behavior])  Select values from the chunked array.
flatten(self, MemoryPool memory_pool=None)  Flatten this ChunkedArray.
format(self, **kwargs)  DEPRECATED, use pyarrow.ChunkedArray.to_string() instead.
get_total_buffer_size(self)  The sum of bytes in each buffer referenced by the chunked array.
index(self, value[, start, end, memory_pool])  Find the first index of a value.
is_nan(self)  Return boolean array indicating the NaN values.
is_null(self, *[, nan_is_null])  Return boolean array indicating the null values.
is_valid(self)  Return boolean array indicating the non-null values.
iterchunks(self)  Convert to an iterator over the chunks (each a pyarrow.Array).
length(self)  Return length of a ChunkedArray.
slice(self[, offset, length])  Compute zero-copy slice of this ChunkedArray.
sort(self[, order])  Sort the ChunkedArray.
take(self, indices)  Select values from the chunked array.
to_numpy(self[, zero_copy_only])  Return a NumPy copy of this array (experimental).
to_pandas(self[, memory_pool, categories, ...])  Convert to a pandas-compatible NumPy array or DataFrame, as appropriate.
to_pylist(self, *[, maps_as_pydicts])  Convert to a list of native Python objects.
to_string(self, *, int indent=0, ...)  Render a "pretty-printed" string representation of the ChunkedArray.
unify_dictionaries(self, ...)  Unify dictionaries across all chunks.
unique(self)  Compute distinct elements in array.
validate(self, *[, full])  Perform validation checks.
value_counts(self)  Compute counts of unique elements in array.
Attributes
chunks  Convert to a list of single-chunked arrays.
data
is_cpu  Whether all chunks in the ChunkedArray are CPU-accessible.
nbytes  Total number of bytes consumed by the elements of the chunked array.
null_count  Number of null entries.
num_chunks  Number of underlying chunks.
type  Return data type of a ChunkedArray.
- cast(self, target_type=None, safe=None, options=None)#
Cast array values to another data type.
See pyarrow.compute.cast() for usage.
- Parameters:
- target_type : DataType, default None. Type to cast array values to.
- safe : bool, default True. Whether to check for conversion errors such as overflow.
- options : CastOptions, default None. Additional cast options, passed as a CastOptions object.
- Returns:
- cast : Array or ChunkedArray
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, 5, 100]])
>>> n_legs.type
DataType(int64)
Change the data type of an array:
>>> n_legs_seconds = n_legs.cast(pa.duration('s'))
>>> n_legs_seconds.type
DurationType(duration[s])
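A supplementary sketch (not from the original reference) of how the safe flag interacts with lossy casts; with the default safe=True this cast would raise an error instead of truncating:

>>> import pyarrow as pa
>>> floats = pa.chunked_array([[2.1, 4.9]])
>>> floats.cast(pa.int64(), safe=False).to_pylist()
[2, 4]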
- chunk(self, i)#
Select a chunk by its index.
- Parameters:
- i : int
- Returns:
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, None], [4, 5, 100]])
>>> n_legs.chunk(1)
<pyarrow.lib.Int64Array object at ...>
[
  4,
  5,
  100
]
- chunks#
Convert to a list of single-chunked arrays.
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, None], [4, 5, 100]])
>>> n_legs
<pyarrow.lib.ChunkedArray object at ...>
[
  [
    2,
    2,
    null
  ],
  [
    4,
    5,
    100
  ]
]
>>> n_legs.chunks
[<pyarrow.lib.Int64Array object at ...>
[
  2,
  2,
  null
], <pyarrow.lib.Int64Array object at ...>
[
  4,
  5,
  100
]]
- combine_chunks(self, MemoryPool memory_pool=None)#
Flatten this ChunkedArray into a single non-chunked array.
- Parameters:
- memory_pool : MemoryPool, default None. For memory allocations, if required, otherwise use default pool.
- Returns:
- result : Array
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, 5, 100]])
>>> n_legs
<pyarrow.lib.ChunkedArray object at ...>
[
  [
    2,
    2,
    4
  ],
  [
    4,
    5,
    100
  ]
]
>>> n_legs.combine_chunks()
<pyarrow.lib.Int64Array object at ...>
[
  2,
  2,
  4,
  4,
  5,
  100
]
- data#
- dictionary_encode(self, null_encoding='mask')#
Compute dictionary-encoded representation of array.
See pyarrow.compute.dictionary_encode() for full usage.
- Parameters:
- null_encoding : str, default "mask". How to handle null entries.
- Returns:
- encoded : ChunkedArray. A dictionary-encoded version of this array.
Examples
>>> import pyarrow as pa
>>> animals = pa.chunked_array((
...     ["Flamingo", "Parrot", "Dog"],
...     ["Horse", "Brittle stars", "Centipede"]
... ))
>>> animals.dictionary_encode()
<pyarrow.lib.ChunkedArray object at ...>
[
...
  -- dictionary:
    [
      "Flamingo",
      "Parrot",
      "Dog",
      "Horse",
      "Brittle stars",
      "Centipede"
    ]
  -- indices:
    [
      0,
      1,
      2
    ],
...
  -- dictionary:
    [
      "Flamingo",
      "Parrot",
      "Dog",
      "Horse",
      "Brittle stars",
      "Centipede"
    ]
  -- indices:
    [
      3,
      4,
      5
    ]
]
- drop_null(self)#
Remove missing values from a chunked array.
See pyarrow.compute.drop_null() for full description.
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, None], [4, 5, 100]])
>>> n_legs
<pyarrow.lib.ChunkedArray object at ...>
[
  [
    2,
    2,
    null
  ],
  [
    4,
    5,
    100
  ]
]
>>> n_legs.drop_null()
<pyarrow.lib.ChunkedArray object at ...>
[
  [
    2,
    2
  ],
  [
    4,
    5,
    100
  ]
]
- equals(self, ChunkedArray other)#
Return whether the contents of two chunked arrays are equal.
- Parameters:
- other : pyarrow.ChunkedArray. Chunked array to compare against.
- Returns:
- are_equal : bool
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, 5, 100]])
>>> animals = pa.chunked_array((
...     ["Flamingo", "Parrot", "Dog"],
...     ["Horse", "Brittle stars", "Centipede"]
... ))
>>> n_legs.equals(n_legs)
True
>>> n_legs.equals(animals)
False
- fill_null(self, fill_value)#
Replace each null element in values with fill_value.
See pyarrow.compute.fill_null() for full usage.
- Parameters:
- fill_value : any. The replacement value for null entries.
- Returns:
- result : Array or ChunkedArray. A new array with nulls replaced by the given value.
Examples
>>> import pyarrow as pa
>>> fill_value = pa.scalar(5, type=pa.int8())
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, None, 100]])
>>> n_legs.fill_null(fill_value)
<pyarrow.lib.ChunkedArray object at ...>
[
  [
    2,
    2,
    4,
    4,
    5,
    100
  ]
]
- filter(self, mask, null_selection_behavior='drop')#
Select values from the chunked array.
See pyarrow.compute.filter() for full usage.
- Parameters:
- mask : Array or array-like. The boolean mask to filter the chunked array with.
- null_selection_behavior : str, default "drop". How nulls in the mask should be handled.
- Returns:
- filtered : Array or ChunkedArray. An array of the same type, with only the elements selected by the boolean mask.
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, 5, 100]])
>>> n_legs
<pyarrow.lib.ChunkedArray object at ...>
[
  [
    2,
    2,
    4
  ],
  [
    4,
    5,
    100
  ]
]
>>> mask = pa.array([True, False, None, True, False, True])
>>> n_legs.filter(mask)
<pyarrow.lib.ChunkedArray object at ...>
[
  [
    2
  ],
  [
    4,
    100
  ]
]
>>> n_legs.filter(mask, null_selection_behavior="emit_null")
<pyarrow.lib.ChunkedArray object at ...>
[
  [
    2,
    null
  ],
  [
    4,
    100
  ]
]
- flatten(self, MemoryPool memory_pool=None)#
Flatten this ChunkedArray. If it has a struct type, the column is flattened into one array per struct field.
- Parameters:
- memory_pool : MemoryPool, default None. For memory allocations, if required, otherwise use default pool.
- Returns:
- result : list of ChunkedArray
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, 5, 100]])
>>> c_arr = pa.chunked_array(n_legs.value_counts())
>>> c_arr
<pyarrow.lib.ChunkedArray object at ...>
[
  -- is_valid: all not null
  -- child 0 type: int64
    [
      2,
      4,
      5,
      100
    ]
  -- child 1 type: int64
    [
      2,
      2,
      1,
      1
    ]
]
>>> c_arr.flatten()
[<pyarrow.lib.ChunkedArray object at ...>
[
  [
    2,
    4,
    5,
    100
  ]
], <pyarrow.lib.ChunkedArray object at ...>
[
  [
    2,
    2,
    1,
    1
  ]
]]
>>> c_arr.type
StructType(struct<values: int64, counts: int64>)
>>> n_legs.type
DataType(int64)
- format(self, **kwargs)#
DEPRECATED, use pyarrow.ChunkedArray.to_string() instead.
- get_total_buffer_size(self)#
The sum of bytes in each buffer referenced by the chunked array.
An array may only reference a portion of a buffer. This method will overestimate in this case and return the byte size of the entire buffer.
If a buffer is referenced multiple times then it will only be counted once.
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, None, 100]])
>>> n_legs.get_total_buffer_size()
49
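To illustrate the overestimate described above, a small sketch (not from the original reference): a zero-copy slice still references the parent's full buffers, so the reported size does not shrink:

>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, None, 100]])
>>> sliced = n_legs.slice(2, 2)
>>> sliced.get_total_buffer_size() == n_legs.get_total_buffer_size()
True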
- index(self, value, start=None, end=None, *, memory_pool=None)#
Find the first index of a value.
See pyarrow.compute.index() for full usage.
- Parameters:
- value : Scalar or object. The value to look for in the array.
- start : int, optional. The start index where to look for value.
- end : int, optional. The end index where to look for value.
- memory_pool : MemoryPool, optional. A memory pool for potential memory allocations.
- Returns:
- index : Int64Scalar. The index of the value in the array (-1 if not found).
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, 5, 100]])
>>> n_legs
<pyarrow.lib.ChunkedArray object at ...>
[
  [
    2,
    2,
    4
  ],
  [
    4,
    5,
    100
  ]
]
>>> n_legs.index(4)
<pyarrow.Int64Scalar: 2>
>>> n_legs.index(4, start=3)
<pyarrow.Int64Scalar: 3>
- is_cpu#
Whether all chunks in the ChunkedArray are CPU-accessible.
- is_nan(self)#
Return boolean array indicating the NaN values.
Examples
>>> import pyarrow as pa
>>> import numpy as np
>>> arr = pa.chunked_array([[2, np.nan, 4], [4, None, 100]])
>>> arr.is_nan()
<pyarrow.lib.ChunkedArray object at ...>
[
  [
    false,
    true,
    false,
    false,
    null,
    false
  ]
]
- is_null(self, *, nan_is_null=False)#
Return boolean array indicating the null values.
- Parameters:
- nan_is_null : bool, default False. Whether floating-point NaN values should also be considered null.
- Returns:
- array : boolean Array or ChunkedArray
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, None, 100]])
>>> n_legs.is_null()
<pyarrow.lib.ChunkedArray object at ...>
[
  [
    false,
    false,
    false,
    false,
    true,
    false
  ]
]
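A supplementary sketch (not from the original reference) of the nan_is_null option, which additionally reports floating-point NaN values as null:

>>> import pyarrow as pa
>>> import numpy as np
>>> arr = pa.chunked_array([[1.5, np.nan, None]])
>>> arr.is_null(nan_is_null=True)
<pyarrow.lib.ChunkedArray object at ...>
[
  [
    false,
    true,
    true
  ]
]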
- is_valid(self)#
Return boolean array indicating the non-null values.
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, None, 100]])
>>> n_legs.is_valid()
<pyarrow.lib.ChunkedArray object at ...>
[
  [
    true,
    true,
    true
  ],
  [
    true,
    false,
    true
  ]
]
- iterchunks(self)#
Convert to an iterator over the chunks (each a pyarrow.Array).
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, None, 100]])
>>> for i in n_legs.iterchunks():
...     print(i.null_count)
...
0
1
- length(self)#
Return length of a ChunkedArray.
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, 5, 100]])
>>> n_legs.length()
6
- nbytes#
Total number of bytes consumed by the elements of the chunked array.
In other words, the sum of bytes from all buffer ranges referenced.
Unlike get_total_buffer_size, this method accounts for array offsets.
If buffers are shared between arrays, the shared portion will be counted multiple times.
The dictionary of dictionary arrays will always be counted in its entirety, even if the array only references a portion of the dictionary.
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, None, 100]])
>>> n_legs.nbytes
49
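A contrasting sketch (not from the original reference): after a zero-copy slice, nbytes counts only the referenced byte ranges and therefore shrinks, while get_total_buffer_size() does not:

>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, None, 100]])
>>> n_legs.slice(0, 2).nbytes < n_legs.nbytes
True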
- null_count#
Number of null entries.
- Returns:
- int
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, None, 100]])
>>> n_legs.null_count
1
- num_chunks#
Number of underlying chunks.
- Returns:
- int
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, None], [4, 5, 100]])
>>> n_legs.num_chunks
2
- slice(self, offset=0, length=None)#
Compute zero-copy slice of this ChunkedArray.
- Parameters:
- offset : int, default 0. Offset from the start of the array to slice.
- length : int, default None. Length of the slice (default is until the end of the array starting from offset).
- Returns:
- sliced : ChunkedArray
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, 5, 100]])
>>> n_legs
<pyarrow.lib.ChunkedArray object at ...>
[
  [
    2,
    2,
    4
  ],
  [
    4,
    5,
    100
  ]
]
>>> n_legs.slice(2, 2)
<pyarrow.lib.ChunkedArray object at ...>
[
  [
    4
  ],
  [
    4
  ]
]
- sort(self, order='ascending', **kwargs)#
Sort the ChunkedArray.
- Parameters:
- order : str, default "ascending". Which order to sort values in. Accepted values are "ascending" and "descending".
- **kwargs : Additional sorting options, as allowed by SortOptions.
- Returns:
- result : ChunkedArray
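The original section gives no example for sort(); a minimal sketch of the expected behavior, using to_pylist() to show the sorted values:

>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[4, 2, 100], [5, 2, 4]])
>>> n_legs.sort(order="descending").to_pylist()
[100, 5, 4, 4, 2, 2]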
- take(self, indices)#
Select values from the chunked array.
See pyarrow.compute.take() for full usage.
- Parameters:
- indices : Array or array-like. The indices in the array whose values will be returned.
- Returns:
- taken : Array or ChunkedArray. An array with the same datatype, containing the taken values.
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, 5, 100]])
>>> n_legs
<pyarrow.lib.ChunkedArray object at ...>
[
  [
    2,
    2,
    4
  ],
  [
    4,
    5,
    100
  ]
]
>>> n_legs.take([1, 4, 5])
<pyarrow.lib.ChunkedArray object at ...>
[
  [
    2,
    5,
    100
  ]
]
- to_numpy(self, zero_copy_only=False)#
Return a NumPy copy of this array (experimental).
- Parameters:
- zero_copy_only : bool, default False. Must be False for a ChunkedArray; a NumPy array cannot be a zero-copy view of chunked data.
- Returns:
- array : numpy.ndarray
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, 5, 100]])
>>> n_legs.to_numpy()
array([  2,   2,   4,   4,   5, 100])
- to_pandas(self, memory_pool=None, categories=None, bool strings_to_categorical=False, bool zero_copy_only=False, bool integer_object_nulls=False, bool date_as_object=True, bool timestamp_as_object=False, bool use_threads=True, bool deduplicate_objects=True, bool ignore_metadata=False, bool safe=True, bool split_blocks=False, bool self_destruct=False, str maps_as_pydicts=None, types_mapper=None, bool coerce_temporal_nanoseconds=False)#
Convert to a pandas-compatible NumPy array or DataFrame, as appropriate
- Parameters:
- memory_pool : MemoryPool, default None. Arrow MemoryPool to use for allocations. Uses the default memory pool if not passed.
- categories : list, default empty. List of fields that should be returned as pandas.Categorical. Only applies to table-like data structures.
- strings_to_categorical : bool, default False. Encode string (UTF8) and binary types to pandas.Categorical.
- zero_copy_only : bool, default False. Raise an ArrowException if this function call would require copying the underlying data.
- integer_object_nulls : bool, default False. Cast integers with nulls to objects.
- date_as_object : bool, default True. Cast dates to objects. If False, convert to datetime64 dtype with the equivalent time unit (if supported). Note: in pandas version < 2.0, only datetime64[ns] conversion is supported.
- timestamp_as_object : bool, default False. Cast non-nanosecond timestamps (np.datetime64) to objects. This is useful in pandas version 1.x if you have timestamps that don't fit in the normal date range of nanosecond timestamps (1678 CE-2262 CE). Non-nanosecond timestamps are supported in pandas version 2.0. If False, all timestamps are converted to datetime64 dtype.
- use_threads : bool, default True. Whether to parallelize the conversion using multiple threads.
- deduplicate_objects : bool, default True. Do not create multiple copies of Python objects when converting, to save on memory use. Conversion will be slower.
- ignore_metadata : bool, default False. If True, do not use the 'pandas' metadata to reconstruct the DataFrame index, if present.
- safe : bool, default True. For certain data types, a cast is needed in order to store the data in a pandas DataFrame or Series (e.g. timestamps are always stored as nanoseconds in pandas). This option controls whether it is a safe cast or not.
- split_blocks : bool, default False. If True, generate one internal "block" for each column when creating a pandas.DataFrame from a RecordBatch or Table. While this can temporarily reduce memory, note that various pandas operations can trigger "consolidation" which may balloon memory use.
- self_destruct : bool, default False. EXPERIMENTAL: If True, attempt to deallocate the originating Arrow memory while converting the Arrow object to pandas. If you use the object after calling to_pandas with this option it will crash your program.
Note that you may not always see memory usage improvements. For example, if multiple columns share an underlying allocation, memory can't be freed until all columns are converted.
- maps_as_pydicts : str, optional, default None. Valid values are None, 'lossy', or 'strict'. The default behavior (None) is to convert Arrow Map arrays to Python association lists (list-of-tuples) in the same order as the Arrow Map, as in [(key1, value1), (key2, value2), ...].
If 'lossy' or 'strict', convert Arrow Map arrays to native Python dicts. This can change the ordering of (key, value) pairs, and will deduplicate multiple keys, resulting in a possible loss of data.
If 'lossy', this key deduplication results in a warning printed when detected. If 'strict', this instead results in an exception being raised when detected.
- types_mapper : function, default None. A function mapping a pyarrow DataType to a pandas ExtensionDtype. This can be used to override the default pandas type for conversion of built-in pyarrow types or in absence of pandas_metadata in the Table schema. The function receives a pyarrow DataType and is expected to return a pandas ExtensionDtype or None if the default conversion should be used for that type. If you have a dictionary mapping, you can pass dict.get as function.
- coerce_temporal_nanoseconds : bool, default False. Only applicable to pandas version >= 2.0. A legacy option to coerce date32, date64, duration, and timestamp time units to nanoseconds when converting to pandas. This is the default behavior in pandas version 1.x. Set this option to True if you'd like to use this coercion when using pandas version >= 2.0 for backwards compatibility (not recommended otherwise).
- Returns:
pandas.Series or pandas.DataFrame, depending on the type of object
Examples
>>> import pyarrow as pa
>>> import pandas as pd
Convert a Table to pandas DataFrame:
>>> table = pa.table([
...     pa.array([2, 4, 5, 100]),
...     pa.array(["Flamingo", "Horse", "Brittle stars", "Centipede"])
... ], names=['n_legs', 'animals'])
>>> table.to_pandas()
   n_legs        animals
0       2       Flamingo
1       4          Horse
2       5  Brittle stars
3     100      Centipede
>>> isinstance(table.to_pandas(), pd.DataFrame)
True
Convert a RecordBatch to pandas DataFrame:
>>> import pyarrow as pa
>>> n_legs = pa.array([2, 4, 5, 100])
>>> animals = pa.array(["Flamingo", "Horse", "Brittle stars", "Centipede"])
>>> batch = pa.record_batch([n_legs, animals],
...                         names=["n_legs", "animals"])
>>> batch
pyarrow.RecordBatch
n_legs: int64
animals: string
----
n_legs: [2,4,5,100]
animals: ["Flamingo","Horse","Brittle stars","Centipede"]
>>> batch.to_pandas()
   n_legs        animals
0       2       Flamingo
1       4          Horse
2       5  Brittle stars
3     100      Centipede
>>> isinstance(batch.to_pandas(), pd.DataFrame)
True
Convert a Chunked Array to pandas Series:
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, 5, 100]])
>>> n_legs.to_pandas()
0      2
1      2
2      4
3      4
4      5
5    100
dtype: int64
>>> isinstance(n_legs.to_pandas(), pd.Series)
True
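A supplementary sketch (not from the original reference) of the types_mapper parameter, mapping int64 to pandas' Arrow-backed extension dtype via dict.get; assumes pandas >= 2.0 for pd.ArrowDtype:

>>> import pyarrow as pa
>>> import pandas as pd
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, 5, 100]])
>>> dtype_mapping = {pa.int64(): pd.ArrowDtype(pa.int64())}
>>> n_legs.to_pandas(types_mapper=dtype_mapping.get).dtype
int64[pyarrow]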
- to_pylist(self, *, maps_as_pydicts=None)#
Convert to a list of native Python objects.
- Parameters:
- maps_as_pydicts : str, optional, default None. Valid values are None, 'lossy', or 'strict'. The default behavior (None) is to convert Arrow Map arrays to Python association lists (list-of-tuples) in the same order as the Arrow Map, as in [(key1, value1), (key2, value2), ...].
If 'lossy' or 'strict', convert Arrow Map arrays to native Python dicts.
If 'lossy', whenever duplicate keys are detected a warning will be printed, and the last seen value of a duplicate key will be in the Python dictionary. If 'strict', this instead results in an exception being raised when detected.
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, None, 100]])
>>> n_legs.to_pylist()
[2, 2, 4, 4, None, 100]
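To see maps_as_pydicts in action, a sketch (not from the original reference) with a Map-typed chunked array: association lists by default, native dicts with 'strict':

>>> import pyarrow as pa
>>> cities = pa.chunked_array([pa.array(
...     [[("Lyon", 35), ("Paris", 69)]],
...     type=pa.map_(pa.string(), pa.int64())
... )])
>>> cities.to_pylist()
[[('Lyon', 35), ('Paris', 69)]]
>>> cities.to_pylist(maps_as_pydicts="strict")
[{'Lyon': 35, 'Paris': 69}]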
- to_string(self, *, int indent=0, int window=5, int container_window=2, bool skip_new_lines=False, int element_size_limit=100)#
Render a "pretty-printed" string representation of the ChunkedArray.
- Parameters:
- indent : int. How much to indent the content of the array to the right, by default 0.
- window : int. How many items to preview within each chunk at the beginning and end of the chunk when the chunk is bigger than the window. The other elements will be ellipsed.
- container_window : int. How many chunks to preview at the beginning and end of the array when the array is bigger than the window. The other elements will be ellipsed. This setting also applies to list columns.
- skip_new_lines : bool. If the array should be rendered as a single line of text or if each element should be on its own line.
- element_size_limit : int, default 100. Maximum number of characters of a single element before it is truncated.
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, 5, 100]])
>>> n_legs.to_string(skip_new_lines=True)
'[[2,2,4],[4,5,100]]'
- type#
Return data type of a ChunkedArray.
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, 5, 100]])
>>> n_legs.type
DataType(int64)
- unify_dictionaries(self, MemoryPool memory_pool=None)#
Unify dictionaries across all chunks.
This method returns an equivalent chunked array, but where all chunks share the same dictionary values. Dictionary indices are transposed accordingly.
If there are no dictionaries in the chunked array, it is returned unchanged.
- Parameters:
- memory_pool : MemoryPool, default None. For memory allocations, if required, otherwise use default pool.
- Returns:
- result : ChunkedArray
Examples
>>> import pyarrow as pa
>>> arr_1 = pa.array(["Flamingo", "Parrot", "Dog"]).dictionary_encode()
>>> arr_2 = pa.array(["Horse", "Brittle stars", "Centipede"]).dictionary_encode()
>>> c_arr = pa.chunked_array([arr_1, arr_2])
>>> c_arr
<pyarrow.lib.ChunkedArray object at ...>
[
...
  -- dictionary:
    [
      "Flamingo",
      "Parrot",
      "Dog"
    ]
  -- indices:
    [
      0,
      1,
      2
    ],
...
  -- dictionary:
    [
      "Horse",
      "Brittle stars",
      "Centipede"
    ]
  -- indices:
    [
      0,
      1,
      2
    ]
]
>>> c_arr.unify_dictionaries()
<pyarrow.lib.ChunkedArray object at ...>
[
...
  -- dictionary:
    [
      "Flamingo",
      "Parrot",
      "Dog",
      "Horse",
      "Brittle stars",
      "Centipede"
    ]
  -- indices:
    [
      0,
      1,
      2
    ],
...
  -- dictionary:
    [
      "Flamingo",
      "Parrot",
      "Dog",
      "Horse",
      "Brittle stars",
      "Centipede"
    ]
  -- indices:
    [
      3,
      4,
      5
    ]
]
- unique(self)#
Compute distinct elements in array.
- Returns:
- pyarrow.Array
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, 5, 100]])
>>> n_legs
<pyarrow.lib.ChunkedArray object at ...>
[
  [
    2,
    2,
    4
  ],
  [
    4,
    5,
    100
  ]
]
>>> n_legs.unique()
<pyarrow.lib.Int64Array object at ...>
[
  2,
  4,
  5,
  100
]
- validate(self, *, full=False)#
Perform validation checks. An exception is raised if validation fails.
By default only cheap validation checks are run. Pass full=True for thorough validation checks (potentially O(n)).
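The original lists no example here; a brief sketch (validate() returns None on success and raises, e.g. ArrowInvalid, on failure):

>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, 5, 100]])
>>> n_legs.validate()           # cheap structural checks
>>> n_legs.validate(full=True)  # thorough, potentially O(n) checks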
- value_counts(self)#
Compute counts of unique elements in array.
Examples
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, 5, 100]])
>>> n_legs
<pyarrow.lib.ChunkedArray object at ...>
[
  [
    2,
    2,
    4
  ],
  [
    4,
    5,
    100
  ]
]
>>> n_legs.value_counts()
<pyarrow.lib.StructArray object at ...>
-- is_valid: all not null
-- child 0 type: int64
  [
    2,
    4,
    5,
    100
  ]
-- child 1 type: int64
  [
    2,
    2,
    1,
    1
  ]

