Python Pandas Tutorial

Python Pandas to_parquet() Method



Parquet is a binary, columnar file format designed for efficient reading and writing of DataFrames. It makes data easy to share across data-analysis languages and supports various compression methods that reduce file size while preserving fast read performance.

The Pandas DataFrame.to_parquet() method allows you to save a DataFrame in the Parquet file format, enabling easy data sharing and storage. This format fully supports all Pandas data types, including specialized types like datetime with timezone information.

Before using the DataFrame.to_parquet() method, you need to install either the 'pyarrow' or 'fastparquet' library, depending on the selected engine. These libraries are optional Python dependencies and can be installed using the following commands −

pip install pyarrow
# or
pip install fastparquet

Syntax

Following is the syntax of the Python Pandas to_parquet() method −

DataFrame.to_parquet(path=None, *, engine='auto', compression='snappy', index=None, partition_cols=None, storage_options=None, **kwargs)

Parameters

The Python Pandas DataFrame.to_parquet() method accepts the following parameters −

  • path: This parameter accepts a string, path object, or file-like object representing the file path where the Parquet file will be saved. If None, the result is returned as bytes.

  • engine: It specifies the Parquet library to use. Available options are 'auto', 'pyarrow', or 'fastparquet'. If 'auto' is set, the option io.parquet.engine is used.

  • compression: Specifies the name of the compression to use, such as 'snappy' (the default), 'gzip', or 'brotli'. If set to None, no compression is applied.

  • index: Whether to include the DataFrame's index in the output file.

  • partition_cols: A list of column names by which to partition the dataset; the order of the columns defines the order of partitioning.

  • storage_options: Additional options for connecting to certain storage back-ends (e.g., AWS S3, Google Cloud Storage).

  • **kwargs: Additional keyword arguments passed to the Parquet library.

Return Value

The Pandas DataFrame.to_parquet() method returns bytes if the path parameter is set to None. Otherwise, it returns None and saves the DataFrame as a Parquet file at the specified path.

Example: Saving a DataFrame to a Parquet File

Here is a basic example demonstrating how to save a Pandas DataFrame in the Parquet file format using the DataFrame.to_parquet() method.

import pandas as pd

# Create a DataFrame
df = pd.DataFrame({"Col_1": range(5), "Col_2": range(5, 10)})
print("Original DataFrame:")
print(df)

# Save the DataFrame as a parquet file
df.to_parquet("df_parquet_file.parquet")
print("\nDataFrame is successfully saved as a parquet file.")

When we run the above program, it produces the following result −

Original DataFrame:
   Col_1  Col_2
0      0      5
1      1      6
2      2      7
3      3      8
4      4      9

DataFrame is successfully saved as a parquet file.
If you visit the folder where the parquet files are saved, you can observe the generated parquet file.

Example: Saving Parquet file with Compression

The following example shows how to use the to_parquet() method to save a Pandas DataFrame as a Parquet file with compression.

import pandas as pd

# Create a DataFrame
df = pd.DataFrame({"Col_1": range(5), "Col_2": range(5, 10)})
print("Original DataFrame:")
print(df)

# Save the DataFrame to a parquet file with compression
df.to_parquet('compressed_data.parquet.gzip', compression='gzip')
print("\nDataFrame is saved in Parquet format with compression.")

Following is an output of the above code −

Original DataFrame:
   Col_1  Col_2
0      0      5
1      1      6
2      2      7
3      3      8
4      4      9

DataFrame is saved in Parquet format with compression.

Example: Writing Parquet file Without Index

When writing a Pandas DataFrame in the Parquet file format, you can omit the index by setting the DataFrame.to_parquet() method's index parameter to False.

import pandas as pd

# Create a DataFrame with a custom index
df = pd.DataFrame({"Col_1": [1, 2, 3, 4, 5],
                   "Col_2": ["a", "b", "c", "d", "e"]},
                  index=['r1', 'r2', 'r3', 'r4', 'r5'])
print("Original DataFrame:")
print(df)

# Save without the DataFrame index
df.to_parquet('df_parquet_no_index.parquet', index=False)

# Read the 'df_parquet_no_index' file back
output = pd.read_parquet('df_parquet_no_index.parquet')
print('Saved DataFrame without index:')
print(output)

Following is an output of the above code −

Original DataFrame:
    Col_1  Col_2
r1      1      a
r2      2      b
r3      3      c
r4      4      d
r5      5      e
Saved DataFrame without index:
   Col_1  Col_2
0      1      a
1      2      b
2      3      c
3      4      d
4      5      e

Example: Save Pandas DataFrame to In-Memory Parquet

This example saves a Pandas DataFrame object into an in-memory Parquet file using the DataFrame.to_parquet() method.

import pandas as pd
import io

# Create a Pandas DataFrame
df = pd.DataFrame(data={'Col_1': [1, 2], 'Col_2': [3.0, 4.0]})

# Display the input DataFrame
print("Original DataFrame:")
print(df)

# Save the DataFrame to an in-memory Parquet buffer
buf = io.BytesIO()
df.to_parquet(buf)

# Read it back from the buffer
output = pd.read_parquet(buf)
print('Saved DataFrame as an in-memory Parquet file:')
print(output)

While executing the above code, we get the following output −

Original DataFrame:
   Col_1  Col_2
0      1    3.0
1      2    4.0
Saved DataFrame as an in-memory Parquet file:
   Col_1  Col_2
0      1    3.0
1      2    4.0