bigframes.pandas.read_parquet#
- bigframes.pandas.read_parquet(path: str | IO[bytes], *, engine: str = 'auto', write_engine: Literal['default', 'bigquery_inline', 'bigquery_load', 'bigquery_streaming', 'bigquery_write', '_deferred'] = 'default') → DataFrame[source]#
Load a Parquet object from the file path (local or Cloud Storage), returning a DataFrame.
Note
This method does not guarantee the same row ordering as the file. Instead, set a serialized index column as the index and sort by that in the resulting DataFrame.
Note
For engines other than “bigquery”, data is inlined in the query SQL if it is small enough (roughly 5 MB or less in memory). Larger data is loaded to a BigQuery table instead.
Examples:
>>> import bigframes.pandas as bpd
>>> gcs_path = "gs://cloud-samples-data/bigquery/us-states/us-states.parquet"
>>> df = bpd.read_parquet(path=gcs_path, engine="bigquery")
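Because row order from the file is not guaranteed (see the note above), a serialized index column can be set and sorted to obtain a deterministic order. A minimal sketch continuing the example above, assuming the file contains a column named "name" (hypothetical):

>>> df = bpd.read_parquet(path=gcs_path, engine="bigquery")
>>> df = df.set_index("name").sort_index()  # sort by the index column for stable ordering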
- Parameters:
  - path (str | IO[bytes]) – Local file path, Cloud Storage URI, or file-like object pointing to the Parquet data.
  - engine (str, default 'auto') – Parser engine to use; pass "bigquery" to load the file directly with BigQuery.
  - write_engine (str, default 'default') – How the data is written to BigQuery: one of 'default', 'bigquery_inline', 'bigquery_load', 'bigquery_streaming', 'bigquery_write', or '_deferred'.
- Returns:
A BigQuery DataFrames DataFrame.
- Return type:
DataFrame