Package Methods (2.35.0)

Summary of entries of Methods for bigquerystorage.

google.cloud.bigquery_storage_v1.client.BigQueryReadClient

BigQueryReadClient(**kwargs)

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.__exit__

__exit__(type,value,traceback)

Releases underlying transport's resources.

See more:google.cloud.bigquery_storage_v1.client.BigQueryReadClient.__exit__

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.common_billing_account_path

common_billing_account_path(billing_account:str)->str

Returns a fully-qualified billing_account string.

See more:google.cloud.bigquery_storage_v1.client.BigQueryReadClient.common_billing_account_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.common_folder_path

common_folder_path(folder:str)->str

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.common_location_path

common_location_path(project:str,location:str)->str

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.common_organization_path

common_organization_path(organization:str)->str

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.common_project_path

common_project_path(project:str)->str

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.create_read_session

create_read_session(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1.types.storage.CreateReadSessionRequest,dict,]]=None,*,parent:typing.Optional[str]=None,read_session:typing.Optional[google.cloud.bigquery_storage_v1.types.stream.ReadSession]=None,max_stream_count:typing.Optional[int]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1.types.stream.ReadSession
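
The signature above is dense, so here is a minimal sketch of creating a read session, assuming a placeholder project and table; the Arrow format and single-stream count are illustrative choices, not defaults.

```python
from google.cloud import bigquery_storage_v1
from google.cloud.bigquery_storage_v1 import types

# Placeholder project and table; replace with your own.
project = "projects/my-project"
table = "projects/my-project/datasets/my_dataset/tables/my_table"

client = bigquery_storage_v1.BigQueryReadClient()

requested_session = types.ReadSession(
    table=table,
    data_format=types.DataFormat.ARROW,  # AVRO is also supported
)

# max_stream_count=1 keeps this sketch single-threaded.
session = client.create_read_session(
    parent=project,
    read_session=requested_session,
    max_stream_count=1,
)
print(session.name, len(session.streams))
```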

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.from_service_account_file

from_service_account_file(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1.client.BigQueryReadClient.from_service_account_file
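
As a brief illustration (the key file path is a placeholder), a client can be built directly from a downloaded service account key:

```python
from google.cloud.bigquery_storage_v1 import BigQueryReadClient

# Placeholder path to a downloaded service account key.
client = BigQueryReadClient.from_service_account_file("service-account.json")
```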

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.from_service_account_info

from_service_account_info(info:dict,*args,**kwargs)

Creates an instance of this client using the provided credentials info.

See more:google.cloud.bigquery_storage_v1.client.BigQueryReadClient.from_service_account_info

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.from_service_account_json

from_service_account_json(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1.client.BigQueryReadClient.from_service_account_json

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(client_options:typing.Optional[google.api_core.client_options.ClientOptions]=None,)

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_common_billing_account_path

parse_common_billing_account_path(path:str)->typing.Dict[str,str]

Parse a billing_account path into its component segments.

See more:google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_common_billing_account_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_common_folder_path

parse_common_folder_path(path:str)->typing.Dict[str,str]

Parse a folder path into its component segments.

See more:google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_common_folder_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_common_location_path

parse_common_location_path(path:str)->typing.Dict[str,str]

Parse a location path into its component segments.

See more:google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_common_location_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_common_organization_path

parse_common_organization_path(path:str)->typing.Dict[str,str]

Parse an organization path into its component segments.

See more:google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_common_organization_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_common_project_path

parse_common_project_path(path:str)->typing.Dict[str,str]

Parse a project path into its component segments.

See more:google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_common_project_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_read_session_path

parse_read_session_path(path:str)->typing.Dict[str,str]

Parses a read_session path into its component segments.

See more:google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_read_session_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_read_stream_path

parse_read_stream_path(path:str)->typing.Dict[str,str]

Parses a read_stream path into its component segments.

See more:google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_read_stream_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_table_path

parse_table_path(path:str)->typing.Dict[str,str]

Parses a table path into its component segments.

See more:google.cloud.bigquery_storage_v1.client.BigQueryReadClient.parse_table_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.read_rows

read_rows(name,offset=0,retry=_MethodDefault._DEFAULT_VALUE,timeout=_MethodDefault._DEFAULT_VALUE,metadata=(),retry_delay_callback=None,)

Reads rows from the table in the format prescribed by the read session.

See more:google.cloud.bigquery_storage_v1.client.BigQueryReadClient.read_rows
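
A short sketch of reading one stream of a session, assuming `client` and `session` come from the create_read_session sketch earlier on this page; rows() yields dict-like mappings.

```python
# Assumed: `client` and `session` from the create_read_session sketch above.
reader = client.read_rows(session.streams[0].name)

# rows() consumes the ReadRowsResponse messages and yields dict-like rows.
for row in reader.rows(session):
    print(row)
    break  # show only the first row
```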

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.read_session_path

read_session_path(project:str,location:str,session:str)->str

Returns a fully-qualified read_session string.

See more:google.cloud.bigquery_storage_v1.client.BigQueryReadClient.read_session_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.read_stream_path

read_stream_path(project:str,location:str,session:str,stream:str)->str

Returns a fully-qualified read_stream string.

See more:google.cloud.bigquery_storage_v1.client.BigQueryReadClient.read_stream_path

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.split_read_stream

split_read_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1.types.storage.SplitReadStreamRequest,dict]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1.types.storage.SplitReadStreamResponse

Splits a given ReadStream into two ReadStream objects.

See more:google.cloud.bigquery_storage_v1.client.BigQueryReadClient.split_read_stream
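
For illustration, a SplitReadStreamRequest can be constructed explicitly; the stream name and the 0.5 fraction below are assumptions.

```python
from google.cloud.bigquery_storage_v1 import types

# Assumed: `client` and `session` from the read sketches above.
request = types.SplitReadStreamRequest(
    name=session.streams[0].name,
    fraction=0.5,  # split the remaining rows roughly in half
)
response = client.split_read_stream(request=request)
print(response.primary_stream.name, response.remainder_stream.name)
```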

google.cloud.bigquery_storage_v1.client.BigQueryReadClient.table_path

table_path(project:str,dataset:str,table:str)->str

Returns a fully-qualified table string.

See more:google.cloud.bigquery_storage_v1.client.BigQueryReadClient.table_path
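
The path helpers are classmethods and need no credentials, so they can be exercised on their own; the identifiers below are placeholders.

```python
from google.cloud.bigquery_storage_v1 import BigQueryReadClient

path = BigQueryReadClient.table_path("my-project", "my_dataset", "my_table")
# -> 'projects/my-project/datasets/my_dataset/tables/my_table'
print(BigQueryReadClient.parse_table_path(path))
# -> {'project': 'my-project', 'dataset': 'my_dataset', 'table': 'my_table'}
```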

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient

BigQueryWriteClient(**kwargs)

Instantiates the big query write client.

See more:google.cloud.bigquery_storage_v1.client.BigQueryWriteClient

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.__exit__

__exit__(type,value,traceback)

Releases underlying transport's resources.

See more:google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.__exit__

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.append_rows

append_rows(requests:typing.Optional[typing.Iterator[google.cloud.bigquery_storage_v1.types.storage.AppendRowsRequest]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->typing.Iterable[google.cloud.bigquery_storage_v1.types.storage.AppendRowsResponse]

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.batch_commit_write_streams

batch_commit_write_streams(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1.types.storage.BatchCommitWriteStreamsRequest,dict,]]=None,*,parent:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1.types.storage.BatchCommitWriteStreamsResponse

Atomically commits a group of PENDING streams that belong to the same parent table.

See more:google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.batch_commit_write_streams
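
A hedged sketch of the PENDING write-stream lifecycle that ends in batch_commit_write_streams: create the stream, append rows (omitted here), finalize it, then commit. The table path is a placeholder.

```python
from google.cloud import bigquery_storage_v1
from google.cloud.bigquery_storage_v1 import types

# Placeholder table; replace with your own.
parent = "projects/my-project/datasets/my_dataset/tables/my_table"

write_client = bigquery_storage_v1.BigQueryWriteClient()

# 1. Create a PENDING stream; its rows stay invisible until the commit below.
stream = write_client.create_write_stream(
    parent=parent,
    write_stream=types.WriteStream(type_=types.WriteStream.Type.PENDING),
)

# 2. append_rows(...) calls would go here (omitted in this sketch).

# 3. Finalize so no further appends are accepted on the stream.
write_client.finalize_write_stream(name=stream.name)

# 4. Atomically commit the finalized stream into the table.
response = write_client.batch_commit_write_streams(
    request=types.BatchCommitWriteStreamsRequest(
        parent=parent,
        write_streams=[stream.name],
    )
)
print(response.commit_time)
```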

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.common_billing_account_path

common_billing_account_path(billing_account:str)->str

Returns a fully-qualified billing_account string.

See more:google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.common_billing_account_path

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.common_folder_path

common_folder_path(folder:str)->str

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.common_location_path

common_location_path(project:str,location:str)->str

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.common_organization_path

common_organization_path(organization:str)->str

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.common_project_path

common_project_path(project:str)->str

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.create_write_stream

create_write_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1.types.storage.CreateWriteStreamRequest,dict,]]=None,*,parent:typing.Optional[str]=None,write_stream:typing.Optional[google.cloud.bigquery_storage_v1.types.stream.WriteStream]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1.types.stream.WriteStream

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.finalize_write_stream

finalize_write_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1.types.storage.FinalizeWriteStreamRequest,dict,]]=None,*,name:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1.types.storage.FinalizeWriteStreamResponse

Finalize a write stream so that no new data can be appended to the stream.

See more:google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.finalize_write_stream

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.flush_rows

flush_rows(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1.types.storage.FlushRowsRequest,dict]]=None,*,write_stream:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1.types.storage.FlushRowsResponse
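
flush_rows applies to BUFFERED-type streams; a minimal sketch, assuming `write_client` from the write sketch above and a hypothetical buffered stream name and offset.

```python
from google.protobuf import wrappers_pb2

from google.cloud.bigquery_storage_v1 import types

# Assumed: `write_client` from the sketch above and a hypothetical BUFFERED stream.
buffered_stream = (
    "projects/my-project/datasets/my_dataset/tables/my_table/streams/my-stream"
)

response = write_client.flush_rows(
    request=types.FlushRowsRequest(
        write_stream=buffered_stream,
        offset=wrappers_pb2.Int64Value(value=10),  # flush rows up to this offset
    )
)
print(response.offset)
```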

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.from_service_account_file

from_service_account_file(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.from_service_account_file

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.from_service_account_info

from_service_account_info(info:dict,*args,**kwargs)

Creates an instance of this client using the provided credentials info.

See more:google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.from_service_account_info

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.from_service_account_json

from_service_account_json(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.from_service_account_json

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(client_options:typing.Optional[google.api_core.client_options.ClientOptions]=None,)

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.get_write_stream

get_write_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1.types.storage.GetWriteStreamRequest,dict]]=None,*,name:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1.types.stream.WriteStream

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_common_billing_account_path

parse_common_billing_account_path(path:str)->typing.Dict[str,str]

Parse a billing_account path into its component segments.

See more:google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_common_billing_account_path

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_common_folder_path

parse_common_folder_path(path:str)->typing.Dict[str,str]

Parse a folder path into its component segments.

See more:google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_common_folder_path

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_common_location_path

parse_common_location_path(path:str)->typing.Dict[str,str]

Parse a location path into its component segments.

See more:google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_common_location_path

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_common_organization_path

parse_common_organization_path(path:str)->typing.Dict[str,str]

Parse an organization path into its component segments.

See more:google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_common_organization_path

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_common_project_path

parse_common_project_path(path:str)->typing.Dict[str,str]

Parse a project path into its component segments.

See more:google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_common_project_path

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_table_path

parse_table_path(path:str)->typing.Dict[str,str]

Parses a table path into its component segments.

See more:google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_table_path

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_write_stream_path

parse_write_stream_path(path:str)->typing.Dict[str,str]

Parses a write_stream path into its component segments.

See more:google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.parse_write_stream_path

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.table_path

table_path(project:str,dataset:str,table:str)->str

Returns a fully-qualified table string.

See more:google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.table_path

google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.write_stream_path

write_stream_path(project:str,dataset:str,table:str,stream:str)->str

Returns a fully-qualified write_stream string.

See more:google.cloud.bigquery_storage_v1.client.BigQueryWriteClient.write_stream_path
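
The write-stream path helpers mirror the read-side ones; the identifiers below are placeholders.

```python
from google.cloud.bigquery_storage_v1 import BigQueryWriteClient

path = BigQueryWriteClient.write_stream_path(
    "my-project", "my_dataset", "my_table", "my-stream"
)
# -> 'projects/my-project/datasets/my_dataset/tables/my_table/streams/my-stream'
print(BigQueryWriteClient.parse_write_stream_path(path))
```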

google.cloud.bigquery_storage_v1.reader.ReadRowsIterable.__iter__

__iter__()

Iterator for each row in all pages.

See more:google.cloud.bigquery_storage_v1.reader.ReadRowsIterable.__iter__

google.cloud.bigquery_storage_v1.reader.ReadRowsIterable.to_arrow

to_arrow()

Create a pyarrow.Table of all rows in the stream.

See more:google.cloud.bigquery_storage_v1.reader.ReadRowsIterable.to_arrow

google.cloud.bigquery_storage_v1.reader.ReadRowsIterable.to_dataframe

to_dataframe(dtypes=None)

Create a pandas.DataFrame of all rows in the stream.

See more:google.cloud.bigquery_storage_v1.reader.ReadRowsIterable.to_dataframe
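
A small sketch of converting a stream's rows to a DataFrame with an explicit dtype mapping; `client` and `session` are assumed from the read sketches above, and the "age" column is hypothetical.

```python
# Assumed: `client` and `session` from the read sketches above; open a fresh reader.
reader = client.read_rows(session.streams[0].name)

# dtypes maps column names to pandas/NumPy dtypes; "age" is a hypothetical column.
df = reader.rows(session).to_dataframe(dtypes={"age": "int32"})
print(df.head())
```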

google.cloud.bigquery_storage_v1.reader.ReadRowsPage.__iter__

__iter__()

A ReadRowsPage is an iterator.

See more:google.cloud.bigquery_storage_v1.reader.ReadRowsPage.__iter__

google.cloud.bigquery_storage_v1.reader.ReadRowsPage.__next__

__next__()

Get the next row in the page.

See more:google.cloud.bigquery_storage_v1.reader.ReadRowsPage.__next__

google.cloud.bigquery_storage_v1.reader.ReadRowsPage.next

next()

Get the next row in the page.

See more:google.cloud.bigquery_storage_v1.reader.ReadRowsPage.next

google.cloud.bigquery_storage_v1.reader.ReadRowsPage.to_arrow

to_arrow()

Create a pyarrow.RecordBatch of rows in the page.

See more:google.cloud.bigquery_storage_v1.reader.ReadRowsPage.to_arrow

google.cloud.bigquery_storage_v1.reader.ReadRowsPage.to_dataframe

to_dataframe(dtypes=None)

Create a pandas.DataFrame of rows in the page.

See more:google.cloud.bigquery_storage_v1.reader.ReadRowsPage.to_dataframe

google.cloud.bigquery_storage_v1.reader.ReadRowsStream

ReadRowsStream(client,name,offset,read_rows_kwargs,retry_delay_callback=None)

Construct a ReadRowsStream.

See more:google.cloud.bigquery_storage_v1.reader.ReadRowsStream

google.cloud.bigquery_storage_v1.reader.ReadRowsStream.__iter__

__iter__()

google.cloud.bigquery_storage_v1.reader.ReadRowsStream.rows

rows(read_session=None)

Iterate over all rows in the stream.

See more:google.cloud.bigquery_storage_v1.reader.ReadRowsStream.rows

google.cloud.bigquery_storage_v1.reader.ReadRowsStream.to_arrow

to_arrow(read_session=None)

Create a pyarrow.Table of all rows in the stream.

See more:google.cloud.bigquery_storage_v1.reader.ReadRowsStream.to_arrow

google.cloud.bigquery_storage_v1.reader.ReadRowsStream.to_dataframe

to_dataframe(read_session=None,dtypes=None)

Create a pandas.DataFrame of all rows in the stream.

See more:google.cloud.bigquery_storage_v1.reader.ReadRowsStream.to_dataframe

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient

BigQueryReadAsyncClient(*,credentials:typing.Optional[google.auth.credentials.Credentials]=None,transport:typing.Optional[typing.Union[str,google.cloud.bigquery_storage_v1.services.big_query_read.transports.base.BigQueryReadTransport,typing.Callable[[...],google.cloud.bigquery_storage_v1.services.big_query_read.transports.base.BigQueryReadTransport,],]]="grpc_asyncio",client_options:typing.Optional[google.api_core.client_options.ClientOptions]=None,client_info:google.api_core.gapic_v1.client_info.ClientInfo=google.api_core.gapic_v1.client_info.ClientInfo)

Instantiates the big query read async client.

See more:google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.common_billing_account_path

common_billing_account_path(billing_account:str)->str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.common_folder_path

common_folder_path(folder:str)->str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.common_location_path

common_location_path(project:str,location:str)->str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.common_organization_path

common_organization_path(organization:str)->str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.common_project_path

common_project_path(project:str)->str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.create_read_session

create_read_session(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1.types.storage.CreateReadSessionRequest,dict,]]=None,*,parent:typing.Optional[str]=None,read_session:typing.Optional[google.cloud.bigquery_storage_v1.types.stream.ReadSession]=None,max_stream_count:typing.Optional[int]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1.types.stream.ReadSession

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.from_service_account_file

from_service_account_file(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.from_service_account_file

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.from_service_account_info

from_service_account_info(info:dict,*args,**kwargs)

Creates an instance of this client using the provided credentials info.

See more:google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.from_service_account_info

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.from_service_account_json

from_service_account_json(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.from_service_account_json

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(client_options:typing.Optional[google.api_core.client_options.ClientOptions]=None,)

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.get_transport_class

get_transport_class(label:typing.Optional[str]=None,)->typing.Type[google.cloud.bigquery_storage_v1.services.big_query_read.transports.base.BigQueryReadTransport]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.parse_common_billing_account_path

parse_common_billing_account_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.parse_common_folder_path

parse_common_folder_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.parse_common_location_path

parse_common_location_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.parse_common_organization_path

parse_common_organization_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.parse_common_project_path

parse_common_project_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.parse_read_session_path

parse_read_session_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.parse_read_stream_path

parse_read_stream_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.parse_table_path

parse_table_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.read_rows

read_rows(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1.types.storage.ReadRowsRequest,dict]]=None,*,read_stream:typing.Optional[str]=None,offset:typing.Optional[int]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->typing.Awaitable[typing.AsyncIterable[google.cloud.bigquery_storage_v1.types.storage.ReadRowsResponse]]

Reads rows from the stream in the format prescribed by the ReadSession.

See more:google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.read_rows
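
An asyncio sketch of the async client's create_read_session plus read_rows, with placeholder project and table names; as the signature above shows, read_rows returns an awaitable that resolves to an async iterable of ReadRowsResponse messages.

```python
import asyncio

from google.cloud.bigquery_storage_v1.services.big_query_read import (
    BigQueryReadAsyncClient,
)
from google.cloud.bigquery_storage_v1 import types


async def main() -> None:
    client = BigQueryReadAsyncClient()

    # Placeholder project and table; replace with your own.
    session = await client.create_read_session(
        parent="projects/my-project",
        read_session=types.ReadSession(
            table="projects/my-project/datasets/my_dataset/tables/my_table",
            data_format=types.DataFormat.ARROW,
        ),
        max_stream_count=1,
    )

    # read_rows resolves to an async iterable of ReadRowsResponse messages.
    stream = await client.read_rows(read_stream=session.streams[0].name)
    async for response in stream:
        print(response.row_count)
        break  # show only the first response


asyncio.run(main())
```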

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.read_session_path

read_session_path(project:str,location:str,session:str)->str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.read_stream_path

read_stream_path(project:str,location:str,session:str,stream:str)->str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.split_read_stream

split_read_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1.types.storage.SplitReadStreamRequest,dict]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1.types.storage.SplitReadStreamResponse

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadAsyncClient.table_path

table_path(project:str,dataset:str,table:str)->str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient

BigQueryReadClient(*,credentials:typing.Optional[google.auth.credentials.Credentials]=None,transport:typing.Optional[typing.Union[str,google.cloud.bigquery_storage_v1.services.big_query_read.transports.base.BigQueryReadTransport,typing.Callable[[...],google.cloud.bigquery_storage_v1.services.big_query_read.transports.base.BigQueryReadTransport,],]]=None,client_options:typing.Optional[typing.Union[google.api_core.client_options.ClientOptions,dict]]=None,client_info:google.api_core.gapic_v1.client_info.ClientInfo=google.api_core.gapic_v1.client_info.ClientInfo)

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.__exit__

__exit__(type,value,traceback)

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.common_billing_account_path

common_billing_account_path(billing_account:str)->str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.common_folder_path

common_folder_path(folder:str)->str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.common_location_path

common_location_path(project:str,location:str)->str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.common_organization_path

common_organization_path(organization:str)->str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.common_project_path

common_project_path(project:str)->str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.create_read_session

create_read_session(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1.types.storage.CreateReadSessionRequest,dict,]]=None,*,parent:typing.Optional[str]=None,read_session:typing.Optional[google.cloud.bigquery_storage_v1.types.stream.ReadSession]=None,max_stream_count:typing.Optional[int]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1.types.stream.ReadSession

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.from_service_account_file

from_service_account_file(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.from_service_account_file

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.from_service_account_info

from_service_account_info(info:dict,*args,**kwargs)

Creates an instance of this client using the provided credentials info.

See more:google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.from_service_account_info

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.from_service_account_json

from_service_account_json(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.from_service_account_json

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(client_options:typing.Optional[google.api_core.client_options.ClientOptions]=None,)

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.parse_common_billing_account_path

parse_common_billing_account_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.parse_common_folder_path

parse_common_folder_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.parse_common_location_path

parse_common_location_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.parse_common_organization_path

parse_common_organization_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.parse_common_project_path

parse_common_project_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.parse_read_session_path

parse_read_session_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.parse_read_stream_path

parse_read_stream_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.parse_table_path

parse_table_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.read_rows

read_rows(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1.types.storage.ReadRowsRequest,dict]]=None,*,read_stream:typing.Optional[str]=None,offset:typing.Optional[int]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->typing.Iterable[google.cloud.bigquery_storage_v1.types.storage.ReadRowsResponse]

Reads rows from the stream in the format prescribed by the ReadSession.

See more:google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.read_rows

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.read_session_path

read_session_path(project:str,location:str,session:str)->str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.read_stream_path

read_stream_path(project:str,location:str,session:str,stream:str)->str

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.split_read_stream

split_read_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1.types.storage.SplitReadStreamRequest,dict]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1.types.storage.SplitReadStreamResponse

google.cloud.bigquery_storage_v1.services.big_query_read.BigQueryReadClient.table_path

table_path(project:str,dataset:str,table:str)->str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient

BigQueryWriteAsyncClient(*,credentials:typing.Optional[google.auth.credentials.Credentials]=None,transport:typing.Optional[typing.Union[str,google.cloud.bigquery_storage_v1.services.big_query_write.transports.base.BigQueryWriteTransport,typing.Callable[[...],google.cloud.bigquery_storage_v1.services.big_query_write.transports.base.BigQueryWriteTransport,],]]="grpc_asyncio",client_options:typing.Optional[google.api_core.client_options.ClientOptions]=None,client_info:google.api_core.gapic_v1.client_info.ClientInfo=google.api_core.gapic_v1.client_info.ClientInfo)

Instantiates the big query write async client.

See more:google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.append_rows

append_rows(requests:typing.Optional[typing.AsyncIterator[google.cloud.bigquery_storage_v1.types.storage.AppendRowsRequest]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->typing.Awaitable[typing.AsyncIterable[google.cloud.bigquery_storage_v1.types.storage.AppendRowsResponse]]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.batch_commit_write_streams

batch_commit_write_streams(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1.types.storage.BatchCommitWriteStreamsRequest,dict,]]=None,*,parent:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1.types.storage.BatchCommitWriteStreamsResponse

Atomically commits a group of PENDING streams that belong to the same parent table.

See more:google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.batch_commit_write_streams

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.common_billing_account_path

common_billing_account_path(billing_account:str)->str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.common_folder_path

common_folder_path(folder:str)->str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.common_location_path

common_location_path(project:str,location:str)->str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.common_organization_path

common_organization_path(organization:str)->str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.common_project_path

common_project_path(project:str)->str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.create_write_stream

create_write_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1.types.storage.CreateWriteStreamRequest,dict,]]=None,*,parent:typing.Optional[str]=None,write_stream:typing.Optional[google.cloud.bigquery_storage_v1.types.stream.WriteStream]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1.types.stream.WriteStream

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.finalize_write_stream

finalize_write_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1.types.storage.FinalizeWriteStreamRequest,dict,]]=None,*,name:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1.types.storage.FinalizeWriteStreamResponse

Finalize a write stream so that no new data can be appended to the stream.

See more:google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.finalize_write_stream

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.flush_rows

flush_rows(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1.types.storage.FlushRowsRequest,dict]]=None,*,write_stream:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1.types.storage.FlushRowsResponse

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_file

from_service_account_file(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_file

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_info

from_service_account_info(info:dict,*args,**kwargs)

Creates an instance of this client using the provided credentials info.

See more:google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_info

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_json

from_service_account_json(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_json

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(client_options:typing.Optional[google.api_core.client_options.ClientOptions]=None,)

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.get_transport_class

get_transport_class(label:typing.Optional[str]=None,)->typing.Type[google.cloud.bigquery_storage_v1.services.big_query_write.transports.base.BigQueryWriteTransport]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.get_write_stream

get_write_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1.types.storage.GetWriteStreamRequest,dict]]=None,*,name:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1.types.stream.WriteStream

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.parse_common_billing_account_path

parse_common_billing_account_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.parse_common_folder_path

parse_common_folder_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.parse_common_location_path

parse_common_location_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.parse_common_organization_path

parse_common_organization_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.parse_common_project_path

parse_common_project_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.parse_table_path

parse_table_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.parse_write_stream_path

parse_write_stream_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.table_path

table_path(project:str,dataset:str,table:str)->str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteAsyncClient.write_stream_path

write_stream_path(project:str,dataset:str,table:str,stream:str)->str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient

BigQueryWriteClient(*,credentials:typing.Optional[google.auth.credentials.Credentials]=None,transport:typing.Optional[typing.Union[str,google.cloud.bigquery_storage_v1.services.big_query_write.transports.base.BigQueryWriteTransport,typing.Callable[[...],google.cloud.bigquery_storage_v1.services.big_query_write.transports.base.BigQueryWriteTransport,],]]=None,client_options:typing.Optional[typing.Union[google.api_core.client_options.ClientOptions,dict]]=None,client_info:google.api_core.gapic_v1.client_info.ClientInfo=google.api_core.gapic_v1.client_info.ClientInfo)

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.__exit__

__exit__(type,value,traceback)

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.append_rows

append_rows(requests:typing.Optional[typing.Iterator[google.cloud.bigquery_storage_v1.types.storage.AppendRowsRequest]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->typing.Iterable[google.cloud.bigquery_storage_v1.types.storage.AppendRowsResponse]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.batch_commit_write_streams

batch_commit_write_streams(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1.types.storage.BatchCommitWriteStreamsRequest,dict,]]=None,*,parent:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1.types.storage.BatchCommitWriteStreamsResponse

Atomically commits a group of PENDING streams that belong to the same parent table.

See more:google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.batch_commit_write_streams

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.common_billing_account_path

common_billing_account_path(billing_account:str)->str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.common_folder_path

common_folder_path(folder:str)->str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.common_location_path

common_location_path(project:str,location:str)->str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.common_organization_path

common_organization_path(organization:str)->str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.common_project_path

common_project_path(project:str)->str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.create_write_stream

create_write_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1.types.storage.CreateWriteStreamRequest,dict,]]=None,*,parent:typing.Optional[str]=None,write_stream:typing.Optional[google.cloud.bigquery_storage_v1.types.stream.WriteStream]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1.types.stream.WriteStream

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.finalize_write_stream

finalize_write_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1.types.storage.FinalizeWriteStreamRequest,dict,]]=None,*,name:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1.types.storage.FinalizeWriteStreamResponse

Finalize a write stream so that no new data can be appended to the stream.

See more:google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.finalize_write_stream

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.flush_rows

flush_rows(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1.types.storage.FlushRowsRequest,dict]]=None,*,write_stream:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1.types.storage.FlushRowsResponse

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.from_service_account_file

from_service_account_file(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.from_service_account_file

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.from_service_account_info

from_service_account_info(info:dict,*args,**kwargs)

Creates an instance of this client using the provided credentials info.

See more:google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.from_service_account_info

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.from_service_account_json

from_service_account_json(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.from_service_account_json

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(client_options:typing.Optional[google.api_core.client_options.ClientOptions]=None,)

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.get_write_stream

get_write_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1.types.storage.GetWriteStreamRequest,dict]]=None,*,name:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1.types.stream.WriteStream

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.parse_common_billing_account_path

parse_common_billing_account_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.parse_common_folder_path

parse_common_folder_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.parse_common_location_path

parse_common_location_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.parse_common_organization_path

parse_common_organization_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.parse_common_project_path

parse_common_project_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.parse_table_path

parse_table_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.parse_write_stream_path

parse_write_stream_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.table_path

table_path(project:str,dataset:str,table:str)->str

google.cloud.bigquery_storage_v1.services.big_query_write.BigQueryWriteClient.write_stream_path

write_stream_path(project:str,dataset:str,table:str,stream:str)->str

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient

MetastorePartitionServiceAsyncClient(*,credentials:typing.Optional[google.auth.credentials.Credentials]=None,transport:typing.Optional[typing.Union[str,google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.transports.base.MetastorePartitionServiceTransport,typing.Callable[[...],google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.transports.base.MetastorePartitionServiceTransport,],]]="grpc_asyncio",client_options:typing.Optional[google.api_core.client_options.ClientOptions]=None,client_info:google.api_core.gapic_v1.client_info.ClientInfo=google.api_core.gapic_v1.client_info.ClientInfo)

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.batch_create_metastore_partitions

batch_create_metastore_partitions(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1alpha.types.metastore_partition.BatchCreateMetastorePartitionsRequest,dict,]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->(google.cloud.bigquery_storage_v1alpha.types.metastore_partition.BatchCreateMetastorePartitionsResponse)

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.batch_delete_metastore_partitions

batch_delete_metastore_partitions(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1alpha.types.metastore_partition.BatchDeleteMetastorePartitionsRequest,dict,]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->None

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.batch_update_metastore_partitions

batch_update_metastore_partitions(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1alpha.types.metastore_partition.BatchUpdateMetastorePartitionsRequest,dict,]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->(google.cloud.bigquery_storage_v1alpha.types.metastore_partition.BatchUpdateMetastorePartitionsResponse)

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.common_billing_account_path

common_billing_account_path(billing_account:str)->str

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.common_folder_path

common_folder_path(folder:str)->str

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.common_location_path

common_location_path(project:str,location:str)->str

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.common_organization_path

common_organization_path(organization:str)->str

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.common_project_path

common_project_path(project:str)->str

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.from_service_account_file

from_service_account_file(filename:str,*args,**kwargs)

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.from_service_account_info

from_service_account_info(info:dict,*args,**kwargs)

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.from_service_account_json

from_service_account_json(filename:str,*args,**kwargs)

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(client_options:typing.Optional[google.api_core.client_options.ClientOptions]=None,)

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.get_transport_class

get_transport_class(label:typing.Optional[str]=None,)->typing.Type[google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.transports.base.MetastorePartitionServiceTransport]

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.list_metastore_partitions

list_metastore_partitions(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1alpha.types.metastore_partition.ListMetastorePartitionsRequest,dict,]]=None,*,parent:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->(google.cloud.bigquery_storage_v1alpha.types.metastore_partition.ListMetastorePartitionsResponse)

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.parse_common_billing_account_path

parse_common_billing_account_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.parse_common_folder_path

parse_common_folder_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.parse_common_location_path

parse_common_location_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.parse_common_organization_path

parse_common_organization_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.parse_common_project_path

parse_common_project_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.parse_read_stream_path

parse_read_stream_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.parse_table_path

parse_table_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.read_stream_path

read_stream_path(project:str,location:str,session:str,stream:str)->str

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.stream_metastore_partitions

stream_metastore_partitions(requests:typing.Optional[typing.AsyncIterator[google.cloud.bigquery_storage_v1alpha.types.metastore_partition.StreamMetastorePartitionsRequest]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->typing.Awaitable[typing.AsyncIterable[google.cloud.bigquery_storage_v1alpha.types.metastore_partition.StreamMetastorePartitionsResponse]]

This is a bidirectional streaming RPC method that allows the client to send a stream of partitions and commit all of them atomically at the end.

See more:google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.stream_metastore_partitions
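
Below is a minimal sketch of this call, assuming the v1alpha client and request type are re-exported at the package level; the project, dataset, and table names are placeholders.

import asyncio

from google.cloud import bigquery_storage_v1alpha


async def stream_partitions():
    client = bigquery_storage_v1alpha.MetastorePartitionServiceAsyncClient()

    async def request_generator():
        # Each request carries a batch of partitions for the same table; the
        # service commits everything atomically once the stream is closed.
        yield bigquery_storage_v1alpha.StreamMetastorePartitionsRequest(
            parent="projects/my-project/datasets/my_dataset/tables/my_table",
        )

    stream = await client.stream_metastore_partitions(requests=request_generator())
    async for response in stream:
        print(response)


asyncio.run(stream_partitions())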

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.table_path

table_path(project:str,dataset:str,table:str)->str

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient

MetastorePartitionServiceClient(*,credentials:typing.Optional[google.auth.credentials.Credentials]=None,transport:typing.Optional[typing.Union[str,google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.transports.base.MetastorePartitionServiceTransport,typing.Callable[[...],google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.transports.base.MetastorePartitionServiceTransport,],]]=None,client_options:typing.Optional[typing.Union[google.api_core.client_options.ClientOptions,dict]]=None,client_info:google.api_core.gapic_v1.client_info.ClientInfo=google.api_core.gapic_v1.client_info.ClientInfo)

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.__exit__

__exit__(type,value,traceback)

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.batch_create_metastore_partitions

batch_create_metastore_partitions(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1alpha.types.metastore_partition.BatchCreateMetastorePartitionsRequest,dict,]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->(google.cloud.bigquery_storage_v1alpha.types.metastore_partition.BatchCreateMetastorePartitionsResponse)

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.batch_delete_metastore_partitions

batch_delete_metastore_partitions(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1alpha.types.metastore_partition.BatchDeleteMetastorePartitionsRequest,dict,]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->None

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.batch_update_metastore_partitions

batch_update_metastore_partitions(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1alpha.types.metastore_partition.BatchUpdateMetastorePartitionsRequest,dict,]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->(google.cloud.bigquery_storage_v1alpha.types.metastore_partition.BatchUpdateMetastorePartitionsResponse)

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.common_billing_account_path

common_billing_account_path(billing_account:str)->str

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.common_folder_path

common_folder_path(folder:str)->str

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.common_location_path

common_location_path(project:str,location:str)->str

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.common_organization_path

common_organization_path(organization:str)->str

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.common_project_path

common_project_path(project:str)->str

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.from_service_account_file

from_service_account_file(filename:str,*args,**kwargs)

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.from_service_account_info

from_service_account_info(info:dict,*args,**kwargs)

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.from_service_account_json

from_service_account_json(filename:str,*args,**kwargs)

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(client_options:typing.Optional[google.api_core.client_options.ClientOptions]=None,)

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.list_metastore_partitions

list_metastore_partitions(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1alpha.types.metastore_partition.ListMetastorePartitionsRequest,dict,]]=None,*,parent:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->(google.cloud.bigquery_storage_v1alpha.types.metastore_partition.ListMetastorePartitionsResponse)

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.parse_common_billing_account_path

parse_common_billing_account_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.parse_common_folder_path

parse_common_folder_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.parse_common_location_path

parse_common_location_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.parse_common_organization_path

parse_common_organization_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.parse_common_project_path

parse_common_project_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.parse_read_stream_path

parse_read_stream_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.parse_table_path

parse_table_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.read_stream_path

read_stream_path(project:str,location:str,session:str,stream:str)->str

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.stream_metastore_partitions

stream_metastore_partitions(requests:typing.Optional[typing.Iterator[google.cloud.bigquery_storage_v1alpha.types.metastore_partition.StreamMetastorePartitionsRequest]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->typing.Iterable[google.cloud.bigquery_storage_v1alpha.types.metastore_partition.StreamMetastorePartitionsResponse]

This is a bidirectional streaming RPC method that allows the client to send a stream of partitions and commit all of them atomically at the end.

See more:google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.stream_metastore_partitions
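
A synchronous sketch of the same pattern; requests can be any plain iterator of StreamMetastorePartitionsRequest, and the names remain placeholders.

from google.cloud import bigquery_storage_v1alpha

client = bigquery_storage_v1alpha.MetastorePartitionServiceClient()


def request_generator():
    yield bigquery_storage_v1alpha.StreamMetastorePartitionsRequest(
        parent="projects/my-project/datasets/my_dataset/tables/my_table",
    )


# The partitions sent over the stream are committed atomically when the
# request iterator is exhausted and the stream closes.
for response in client.stream_metastore_partitions(requests=request_generator()):
    print(response)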

google.cloud.bigquery_storage_v1alpha.services.metastore_partition_service.MetastorePartitionServiceClient.table_path

table_path(project:str,dataset:str,table:str)->str

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient

MetastorePartitionServiceAsyncClient(*,credentials:typing.Optional[google.auth.credentials.Credentials]=None,transport:typing.Optional[typing.Union[str,google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.transports.base.MetastorePartitionServiceTransport,typing.Callable[[...],google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.transports.base.MetastorePartitionServiceTransport,],]]="grpc_asyncio",client_options:typing.Optional[google.api_core.client_options.ClientOptions]=None,client_info:google.api_core.gapic_v1.client_info.ClientInfo=google.api_core.gapic_v1.client_info.ClientInfo)

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.batch_create_metastore_partitions

batch_create_metastore_partitions(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta.types.metastore_partition.BatchCreateMetastorePartitionsRequest,dict,]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->(google.cloud.bigquery_storage_v1beta.types.metastore_partition.BatchCreateMetastorePartitionsResponse)

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.batch_delete_metastore_partitions

batch_delete_metastore_partitions(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta.types.metastore_partition.BatchDeleteMetastorePartitionsRequest,dict,]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->None

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.batch_update_metastore_partitions

batch_update_metastore_partitions(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta.types.metastore_partition.BatchUpdateMetastorePartitionsRequest,dict,]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->(google.cloud.bigquery_storage_v1beta.types.metastore_partition.BatchUpdateMetastorePartitionsResponse)

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.common_billing_account_path

common_billing_account_path(billing_account:str)->str

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.common_folder_path

common_folder_path(folder:str)->str

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.common_location_path

common_location_path(project:str,location:str)->str

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.common_organization_path

common_organization_path(organization:str)->str

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.common_project_path

common_project_path(project:str)->str

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.from_service_account_file

from_service_account_file(filename:str,*args,**kwargs)

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.from_service_account_info

from_service_account_info(info:dict,*args,**kwargs)

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.from_service_account_json

from_service_account_json(filename:str,*args,**kwargs)

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(client_options:typing.Optional[google.api_core.client_options.ClientOptions]=None,)

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.get_transport_class

get_transport_class(label:typing.Optional[str]=None,)->typing.Type[google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.transports.base.MetastorePartitionServiceTransport]

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.list_metastore_partitions

list_metastore_partitions(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta.types.metastore_partition.ListMetastorePartitionsRequest,dict,]]=None,*,parent:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->(google.cloud.bigquery_storage_v1beta.types.metastore_partition.ListMetastorePartitionsResponse)

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.parse_common_billing_account_path

parse_common_billing_account_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.parse_common_folder_path

parse_common_folder_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.parse_common_location_path

parse_common_location_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.parse_common_organization_path

parse_common_organization_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.parse_common_project_path

parse_common_project_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.parse_read_stream_path

parse_read_stream_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.parse_table_path

parse_table_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.read_stream_path

read_stream_path(project:str,location:str,session:str,stream:str)->str

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.stream_metastore_partitions

stream_metastore_partitions(requests:typing.Optional[typing.AsyncIterator[google.cloud.bigquery_storage_v1beta.types.metastore_partition.StreamMetastorePartitionsRequest]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->typing.Awaitable[typing.AsyncIterable[google.cloud.bigquery_storage_v1beta.types.metastore_partition.StreamMetastorePartitionsResponse]]

This is a bidirectional streaming RPC method that allows the client to send a stream of partitions and commit all of them atomically at the end.

See more:google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.stream_metastore_partitions

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceAsyncClient.table_path

table_path(project:str,dataset:str,table:str)->str

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient

MetastorePartitionServiceClient(*,credentials:typing.Optional[google.auth.credentials.Credentials]=None,transport:typing.Optional[typing.Union[str,google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.transports.base.MetastorePartitionServiceTransport,typing.Callable[[...],google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.transports.base.MetastorePartitionServiceTransport,],]]=None,client_options:typing.Optional[typing.Union[google.api_core.client_options.ClientOptions,dict]]=None,client_info:google.api_core.gapic_v1.client_info.ClientInfo=google.api_core.gapic_v1.client_info.ClientInfo)

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.__exit__

__exit__(type,value,traceback)

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.batch_create_metastore_partitions

batch_create_metastore_partitions(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta.types.metastore_partition.BatchCreateMetastorePartitionsRequest,dict,]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->(google.cloud.bigquery_storage_v1beta.types.metastore_partition.BatchCreateMetastorePartitionsResponse)

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.batch_delete_metastore_partitions

batch_delete_metastore_partitions(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta.types.metastore_partition.BatchDeleteMetastorePartitionsRequest,dict,]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->None

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.batch_update_metastore_partitions

batch_update_metastore_partitions(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta.types.metastore_partition.BatchUpdateMetastorePartitionsRequest,dict,]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->(google.cloud.bigquery_storage_v1beta.types.metastore_partition.BatchUpdateMetastorePartitionsResponse)

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.common_billing_account_path

common_billing_account_path(billing_account:str)->str

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.common_folder_path

common_folder_path(folder:str)->str

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.common_location_path

common_location_path(project:str,location:str)->str

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.common_organization_path

common_organization_path(organization:str)->str

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.common_project_path

common_project_path(project:str)->str

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.from_service_account_file

from_service_account_file(filename:str,*args,**kwargs)

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.from_service_account_info

from_service_account_info(info:dict,*args,**kwargs)

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.from_service_account_json

from_service_account_json(filename:str,*args,**kwargs)

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(client_options:typing.Optional[google.api_core.client_options.ClientOptions]=None,)

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.list_metastore_partitions

list_metastore_partitions(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta.types.metastore_partition.ListMetastorePartitionsRequest,dict,]]=None,*,parent:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->(google.cloud.bigquery_storage_v1beta.types.metastore_partition.ListMetastorePartitionsResponse)

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.parse_common_billing_account_path

parse_common_billing_account_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.parse_common_folder_path

parse_common_folder_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.parse_common_location_path

parse_common_location_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.parse_common_organization_path

parse_common_organization_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.parse_common_project_path

parse_common_project_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.parse_read_stream_path

parse_read_stream_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.parse_table_path

parse_table_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.read_stream_path

read_stream_path(project:str,location:str,session:str,stream:str)->str

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.stream_metastore_partitions

stream_metastore_partitions(requests:typing.Optional[typing.Iterator[google.cloud.bigquery_storage_v1beta.types.metastore_partition.StreamMetastorePartitionsRequest]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->typing.Iterable[google.cloud.bigquery_storage_v1beta.types.metastore_partition.StreamMetastorePartitionsResponse]

This is a bidirectional streaming RPC method that allows the client to send a stream of partitions and commit all of them atomically at the end.

See more:google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.stream_metastore_partitions

google.cloud.bigquery_storage_v1beta.services.metastore_partition_service.MetastorePartitionServiceClient.table_path

table_path(project:str,dataset:str,table:str)->str

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient

BigQueryReadClient(**kwargs)

Instantiates the BigQuery read client.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.__exit__

__exit__(type,value,traceback)

Releases underlying transport's resources.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.exit
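
Because __exit__ releases the transport, the client can be used as a context manager; a brief sketch, assuming the matching __enter__ is defined as it is on the generated clients:

from google.cloud import bigquery_storage_v1beta2

# Transport resources are released automatically when the block exits.
with bigquery_storage_v1beta2.BigQueryReadClient() as client:
    pass  # issue create_read_session / read_rows calls here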

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.common_billing_account_path

common_billing_account_path(billing_account:str)->str

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.common_folder_path

common_folder_path(folder:str)->str

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.common_location_path

common_location_path(project:str,location:str)->str

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.common_organization_path

common_organization_path(organization:str)->str

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.common_project_path

common_project_path(project:str)->str

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.create_read_session

create_read_session(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta2.types.storage.CreateReadSessionRequest,dict,]]=None,*,parent:typing.Optional[str]=None,read_session:typing.Optional[google.cloud.bigquery_storage_v1beta2.types.stream.ReadSession]=None,max_stream_count:typing.Optional[int]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1beta2.types.stream.ReadSession

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.from_service_account_file

from_service_account_file(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.from_service_account_file
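
A brief sketch, assuming a local service-account key at a hypothetical path; from_service_account_json accepts the same file path, and from_service_account_info accepts an already-parsed dict instead.

from google.cloud import bigquery_storage_v1beta2

client = bigquery_storage_v1beta2.BigQueryReadClient.from_service_account_file(
    "path/to/service-account.json"
)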

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.from_service_account_info

from_service_account_info(info:dict,*args,**kwargs)

Creates an instance of this client using the provided credentials info.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.from_service_account_info

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.from_service_account_json

from_service_account_json(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.from_service_account_json

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(client_options:typing.Optional[google.api_core.client_options.ClientOptions]=None,)

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_common_billing_account_path

parse_common_billing_account_path(path:str)->typing.Dict[str,str]

Parse a billing_account path into its component segments.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_common_billing_account_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_common_folder_path

parse_common_folder_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_common_location_path

parse_common_location_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_common_organization_path

parse_common_organization_path(path:str)->typing.Dict[str,str]

Parse an organization path into its component segments.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_common_organization_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_common_project_path

parse_common_project_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_read_session_path

parse_read_session_path(path:str)->typing.Dict[str,str]

Parses a read_session path into its component segments.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_read_session_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_read_stream_path

parse_read_stream_path(path:str)->typing.Dict[str,str]

Parses a read_stream path into its component segments.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_read_stream_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_table_path

parse_table_path(path:str)->typing.Dict[str,str]

Parses a table path into its component segments.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.parse_table_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.read_rows

read_rows(name,offset=0,retry=_MethodDefault._DEFAULT_VALUE,timeout=_MethodDefault._DEFAULT_VALUE,metadata=(),retry_delay_callback=None,)

Reads rows from the table in the format prescribed by the read session.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.read_rows
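
A hedged end-to-end sketch with this hand-written client; the project and table names are placeholders, and the Avro format choice is arbitrary.

from google.cloud import bigquery_storage_v1beta2
from google.cloud.bigquery_storage_v1beta2 import types

client = bigquery_storage_v1beta2.BigQueryReadClient()

session = client.create_read_session(
    parent="projects/my-project",
    read_session=types.ReadSession(
        table="projects/my-project/datasets/my_dataset/tables/my_table",
        data_format=types.DataFormat.AVRO,
    ),
    max_stream_count=1,
)

# The wrapper returned by read_rows reconnects and resumes from the last
# received offset on transient errors.
reader = client.read_rows(session.streams[0].name)
for response in reader:
    print(response.row_count)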

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.read_session_path

read_session_path(project:str,location:str,session:str)->str

Returns a fully-qualified read_session string.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.read_session_path
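
A small sketch of the resource-path helpers; segment values are placeholders, and the assumed pattern is projects/{project}/locations/{location}/sessions/{session}.

from google.cloud import bigquery_storage_v1beta2

path = bigquery_storage_v1beta2.BigQueryReadClient.read_session_path(
    "my-project", "us-central1", "my-session"
)
# parse_read_session_path reverses the helper, returning the named segments.
print(bigquery_storage_v1beta2.BigQueryReadClient.parse_read_session_path(path))
# {'project': 'my-project', 'location': 'us-central1', 'session': 'my-session'}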

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.read_stream_path

read_stream_path(project:str,location:str,session:str,stream:str)->str

Returns a fully-qualified read_stream string.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.read_stream_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.split_read_stream

split_read_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta2.types.storage.SplitReadStreamRequest,dict,]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1beta2.types.storage.SplitReadStreamResponse

Splits a given ReadStream into two ReadStream objects.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.split_read_stream
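
A hedged sketch, assuming an existing stream name from an earlier create_read_session call; the primary and remainder streams returned together cover the rows of the original stream.

from google.cloud import bigquery_storage_v1beta2

client = bigquery_storage_v1beta2.BigQueryReadClient()
response = client.split_read_stream(
    request={
        "name": "projects/my-project/locations/us/sessions/my-session/streams/my-stream",
        "fraction": 0.5,  # approximate split point within the stream
    }
)
print(response.primary_stream.name, response.remainder_stream.name)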

google.cloud.bigquery_storage_v1beta2.client.BigQueryReadClient.table_path

table_path(project:str,dataset:str,table:str)->str

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient

BigQueryWriteClient(**kwargs)

Instantiates the BigQuery write client.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.__exit__

__exit__(type,value,traceback)

Releases underlying transport's resources.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.exit

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.append_rows

append_rows(requests:typing.Optional[typing.Iterator[google.cloud.bigquery_storage_v1beta2.types.storage.AppendRowsRequest]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->typing.Iterable[google.cloud.bigquery_storage_v1beta2.types.storage.AppendRowsResponse]

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.batch_commit_write_streams

batch_commit_write_streams(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta2.types.storage.BatchCommitWriteStreamsRequest,dict,]]=None,*,parent:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->(google.cloud.bigquery_storage_v1beta2.types.storage.BatchCommitWriteStreamsResponse)

Atomically commits a group of PENDING streams that belong to the same parent table.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.batch_commit_write_streams
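
A brief sketch, assuming two already-finalized PENDING write streams on the same table; all names are placeholders.

from google.cloud import bigquery_storage_v1beta2

client = bigquery_storage_v1beta2.BigQueryWriteClient()
response = client.batch_commit_write_streams(
    request={
        "parent": "projects/my-project/datasets/my_dataset/tables/my_table",
        "write_streams": ["<finalized-stream-1>", "<finalized-stream-2>"],
    }
)
# Rows from the committed streams become visible at commit_time.
print(response.commit_time)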

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.common_billing_account_path

common_billing_account_path(billing_account:str)->str

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.common_folder_path

common_folder_path(folder:str)->str

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.common_location_path

common_location_path(project:str,location:str)->str

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.common_organization_path

common_organization_path(organization:str)->str

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.common_project_path

common_project_path(project:str)->str

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.create_write_stream

create_write_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta2.types.storage.CreateWriteStreamRequest,dict,]]=None,*,parent:typing.Optional[str]=None,write_stream:typing.Optional[google.cloud.bigquery_storage_v1beta2.types.stream.WriteStream]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1beta2.types.stream.WriteStream

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.finalize_write_stream

finalize_write_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta2.types.storage.FinalizeWriteStreamRequest,dict,]]=None,*,name:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1beta2.types.storage.FinalizeWriteStreamResponse

Finalize a write stream so that no new data can be appended to the stream.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.finalize_write_stream
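
A hedged sketch of the surrounding pending-stream lifecycle; the table name is a placeholder, and the type_ field name is assumed to mirror the v1 samples.

from google.cloud import bigquery_storage_v1beta2
from google.cloud.bigquery_storage_v1beta2 import types

client = bigquery_storage_v1beta2.BigQueryWriteClient()
stream = client.create_write_stream(
    parent="projects/my-project/datasets/my_dataset/tables/my_table",
    write_stream=types.WriteStream(type_=types.WriteStream.Type.PENDING),
)
# ... append rows to stream.name with append_rows here ...
response = client.finalize_write_stream(name=stream.name)
# No further appends are accepted; row_count reports what was written.
print(response.row_count)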

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.flush_rows

flush_rows(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta2.types.storage.FlushRowsRequest,dict]]=None,*,write_stream:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1beta2.types.storage.FlushRowsResponse

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.from_service_account_file

from_service_account_file(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.from_service_account_file

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.from_service_account_info

from_service_account_info(info:dict,*args,**kwargs)

Creates an instance of this client using the provided credentials info.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.from_service_account_info

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.from_service_account_json

from_service_account_json(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.from_service_account_json

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(client_options:typing.Optional[google.api_core.client_options.ClientOptions]=None,)

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.get_write_stream

get_write_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta2.types.storage.GetWriteStreamRequest,dict,]]=None,*,name:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1beta2.types.stream.WriteStream

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_common_billing_account_path

parse_common_billing_account_path(path:str)->typing.Dict[str,str]

Parse a billing_account path into its component segments.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_common_billing_account_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_common_folder_path

parse_common_folder_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_common_location_path

parse_common_location_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_common_organization_path

parse_common_organization_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_common_project_path

parse_common_project_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_table_path

parse_table_path(path:str)->typing.Dict[str,str]

Parses a table path into its component segments.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_table_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_write_stream_path

parse_write_stream_path(path:str)->typing.Dict[str,str]

Parses a write_stream path into its component segments.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.parse_write_stream_path

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.table_path

table_path(project:str,dataset:str,table:str)->str

google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.write_stream_path

write_stream_path(project:str,dataset:str,table:str,stream:str)->str

Returns a fully-qualified write_stream string.

See more:google.cloud.bigquery_storage_v1beta2.client.BigQueryWriteClient.write_stream_path
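
A small sketch of the write-stream path helpers; segment values are placeholders, and the assumed pattern is projects/{project}/datasets/{dataset}/tables/{table}/streams/{stream}.

from google.cloud import bigquery_storage_v1beta2

path = bigquery_storage_v1beta2.BigQueryWriteClient.write_stream_path(
    "my-project", "my_dataset", "my_table", "my-stream"
)
print(bigquery_storage_v1beta2.BigQueryWriteClient.parse_write_stream_path(path))
# {'project': 'my-project', 'dataset': 'my_dataset', 'table': 'my_table', 'stream': 'my-stream'}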

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient

BigQueryReadAsyncClient(*,credentials:typing.Optional[google.auth.credentials.Credentials]=None,transport:typing.Optional[typing.Union[str,google.cloud.bigquery_storage_v1beta2.services.big_query_read.transports.base.BigQueryReadTransport,typing.Callable[[...],google.cloud.bigquery_storage_v1beta2.services.big_query_read.transports.base.BigQueryReadTransport,],]]="grpc_asyncio",client_options:typing.Optional[google.api_core.client_options.ClientOptions]=None,client_info:google.api_core.gapic_v1.client_info.ClientInfo=google.api_core.gapic_v1.client_info.ClientInfo)

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.common_billing_account_path

common_billing_account_path(billing_account:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.common_folder_path

common_folder_path(folder:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.common_location_path

common_location_path(project:str,location:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.common_organization_path

common_organization_path(organization:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.common_project_path

common_project_path(project:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.create_read_session

create_read_session(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta2.types.storage.CreateReadSessionRequest,dict,]]=None,*,parent:typing.Optional[str]=None,read_session:typing.Optional[google.cloud.bigquery_storage_v1beta2.types.stream.ReadSession]=None,max_stream_count:typing.Optional[int]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1beta2.types.stream.ReadSession

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.from_service_account_file

from_service_account_file(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.from_service_account_file

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.from_service_account_info

from_service_account_info(info:dict,*args,**kwargs)

Creates an instance of this client using the provided credentials info.

See more:google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.from_service_account_info

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.from_service_account_json

from_service_account_json(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.from_service_account_json

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(client_options:typing.Optional[google.api_core.client_options.ClientOptions]=None,)

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.get_transport_class

get_transport_class(label:typing.Optional[str]=None,)->typing.Type[google.cloud.bigquery_storage_v1beta2.services.big_query_read.transports.base.BigQueryReadTransport]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.parse_common_billing_account_path

parse_common_billing_account_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.parse_common_folder_path

parse_common_folder_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.parse_common_location_path

parse_common_location_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.parse_common_organization_path

parse_common_organization_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.parse_common_project_path

parse_common_project_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.parse_read_session_path

parse_read_session_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.parse_read_stream_path

parse_read_stream_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.parse_table_path

parse_table_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.read_rows

read_rows(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta2.types.storage.ReadRowsRequest,dict]]=None,*,read_stream:typing.Optional[str]=None,offset:typing.Optional[int]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->typing.Awaitable[typing.AsyncIterable[google.cloud.bigquery_storage_v1beta2.types.storage.ReadRowsResponse]]

Reads rows from the stream in the format prescribed by the ReadSession.

See more:google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.read_rows
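
A hedged sketch with the generated async client; the stream name is a placeholder that would normally come from an earlier create_read_session call.

import asyncio

from google.cloud.bigquery_storage_v1beta2.services.big_query_read import (
    BigQueryReadAsyncClient,
)


async def read_stream(stream_name: str) -> None:
    client = BigQueryReadAsyncClient()
    stream = await client.read_rows(read_stream=stream_name, offset=0)
    async for response in stream:
        # Each response carries a block of rows in the session's data format.
        print(response.row_count)


asyncio.run(
    read_stream("projects/my-project/locations/us/sessions/my-session/streams/my-stream")
)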

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.read_session_path

read_session_path(project:str,location:str,session:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.read_stream_path

read_stream_path(project:str,location:str,session:str,stream:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.split_read_stream

split_read_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta2.types.storage.SplitReadStreamRequest,dict,]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1beta2.types.storage.SplitReadStreamResponse

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadAsyncClient.table_path

table_path(project:str,dataset:str,table:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient

BigQueryReadClient(*,credentials:typing.Optional[google.auth.credentials.Credentials]=None,transport:typing.Optional[typing.Union[str,google.cloud.bigquery_storage_v1beta2.services.big_query_read.transports.base.BigQueryReadTransport,typing.Callable[[...],google.cloud.bigquery_storage_v1beta2.services.big_query_read.transports.base.BigQueryReadTransport,],]]=None,client_options:typing.Optional[typing.Union[google.api_core.client_options.ClientOptions,dict]]=None,client_info:google.api_core.gapic_v1.client_info.ClientInfo=google.api_core.gapic_v1.client_info.ClientInfo)

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.__exit__

__exit__(type,value,traceback)

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.common_billing_account_path

common_billing_account_path(billing_account:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.common_folder_path

common_folder_path(folder:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.common_location_path

common_location_path(project:str,location:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.common_organization_path

common_organization_path(organization:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.common_project_path

common_project_path(project:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.create_read_session

create_read_session(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta2.types.storage.CreateReadSessionRequest,dict,]]=None,*,parent:typing.Optional[str]=None,read_session:typing.Optional[google.cloud.bigquery_storage_v1beta2.types.stream.ReadSession]=None,max_stream_count:typing.Optional[int]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1beta2.types.stream.ReadSession

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.from_service_account_file

from_service_account_file(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.from_service_account_file

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.from_service_account_info

from_service_account_info(info:dict,*args,**kwargs)

Creates an instance of this client using the provided credentials info.

See more:google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.from_service_account_info

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.from_service_account_json

from_service_account_json(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.from_service_account_json

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(client_options:typing.Optional[google.api_core.client_options.ClientOptions]=None,)

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.parse_common_billing_account_path

parse_common_billing_account_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.parse_common_folder_path

parse_common_folder_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.parse_common_location_path

parse_common_location_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.parse_common_organization_path

parse_common_organization_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.parse_common_project_path

parse_common_project_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.parse_read_session_path

parse_read_session_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.parse_read_stream_path

parse_read_stream_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.parse_table_path

parse_table_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.read_rows

read_rows(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta2.types.storage.ReadRowsRequest,dict]]=None,*,read_stream:typing.Optional[str]=None,offset:typing.Optional[int]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->typing.Iterable[google.cloud.bigquery_storage_v1beta2.types.storage.ReadRowsResponse]

Reads rows from the stream in the format prescribed by the ReadSession.

See more:google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.read_rows

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.read_session_path

read_session_path(project:str,location:str,session:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.read_stream_path

read_stream_path(project:str,location:str,session:str,stream:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.split_read_stream

split_read_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta2.types.storage.SplitReadStreamRequest,dict,]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1beta2.types.storage.SplitReadStreamResponse

google.cloud.bigquery_storage_v1beta2.services.big_query_read.BigQueryReadClient.table_path

table_path(project:str,dataset:str,table:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient

BigQueryWriteAsyncClient(*,credentials:typing.Optional[google.auth.credentials.Credentials]=None,transport:typing.Optional[typing.Union[str,google.cloud.bigquery_storage_v1beta2.services.big_query_write.transports.base.BigQueryWriteTransport,typing.Callable[[...],google.cloud.bigquery_storage_v1beta2.services.big_query_write.transports.base.BigQueryWriteTransport,],]]="grpc_asyncio",client_options:typing.Optional[google.api_core.client_options.ClientOptions]=None,client_info:google.api_core.gapic_v1.client_info.ClientInfo=google.api_core.gapic_v1.client_info.ClientInfo)

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.append_rows

append_rows(requests:typing.Optional[typing.AsyncIterator[google.cloud.bigquery_storage_v1beta2.types.storage.AppendRowsRequest]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->typing.Awaitable[typing.AsyncIterable[google.cloud.bigquery_storage_v1beta2.types.storage.AppendRowsResponse]]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.batch_commit_write_streams

batch_commit_write_streams(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta2.types.storage.BatchCommitWriteStreamsRequest,dict,]]=None,*,parent:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->(google.cloud.bigquery_storage_v1beta2.types.storage.BatchCommitWriteStreamsResponse)

Atomically commits a group of PENDING streams that belong to the same parent table.

See more:google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.batch_commit_write_streams

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.common_billing_account_path

common_billing_account_path(billing_account:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.common_folder_path

common_folder_path(folder:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.common_location_path

common_location_path(project:str,location:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.common_organization_path

common_organization_path(organization:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.common_project_path

common_project_path(project:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.create_write_stream

create_write_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta2.types.storage.CreateWriteStreamRequest,dict,]]=None,*,parent:typing.Optional[str]=None,write_stream:typing.Optional[google.cloud.bigquery_storage_v1beta2.types.stream.WriteStream]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1beta2.types.stream.WriteStream

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.finalize_write_stream

finalize_write_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta2.types.storage.FinalizeWriteStreamRequest,dict,]]=None,*,name:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1beta2.types.storage.FinalizeWriteStreamResponse

Finalize a write stream so that no new data can be appended to the stream.

See more:google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.finalize_write_stream

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.flush_rows

flush_rows(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta2.types.storage.FlushRowsRequest,dict]]=None,*,write_stream:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1beta2.types.storage.FlushRowsResponse

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_file

from_service_account_file(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_file

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_info

from_service_account_info(info:dict,*args,**kwargs)

Creates an instance of this client using the provided credentials info.

See more:google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_info

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_json

from_service_account_json(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.from_service_account_json

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(client_options:typing.Optional[google.api_core.client_options.ClientOptions]=None,)

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.get_transport_class

get_transport_class(label:typing.Optional[str]=None,)->typing.Type[google.cloud.bigquery_storage_v1beta2.services.big_query_write.transports.base.BigQueryWriteTransport]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.get_write_stream

get_write_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta2.types.storage.GetWriteStreamRequest,dict,]]=None,*,name:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary_async.AsyncRetry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1beta2.types.stream.WriteStream

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.parse_common_billing_account_path

parse_common_billing_account_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.parse_common_folder_path

parse_common_folder_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.parse_common_location_path

parse_common_location_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.parse_common_organization_path

parse_common_organization_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.parse_common_project_path

parse_common_project_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.parse_table_path

parse_table_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.parse_write_stream_path

parse_write_stream_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.table_path

table_path(project:str,dataset:str,table:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteAsyncClient.write_stream_path

write_stream_path(project:str,dataset:str,table:str,stream:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient

BigQueryWriteClient(*,credentials:typing.Optional[google.auth.credentials.Credentials]=None,transport:typing.Optional[typing.Union[str,google.cloud.bigquery_storage_v1beta2.services.big_query_write.transports.base.BigQueryWriteTransport,typing.Callable[[...],google.cloud.bigquery_storage_v1beta2.services.big_query_write.transports.base.BigQueryWriteTransport,],]]=None,client_options:typing.Optional[typing.Union[google.api_core.client_options.ClientOptions,dict]]=None,client_info:google.api_core.gapic_v1.client_info.ClientInfo=google.api_core.gapic_v1.client_info.ClientInfo)
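
Construction mirrors the other GAPIC clients: with no arguments the client uses Application Default Credentials and the gRPC transport, and client_options (object or dict) can override the endpoint. A brief sketch:

```python
from google.api_core.client_options import ClientOptions
from google.cloud.bigquery_storage_v1beta2.services.big_query_write import (
    BigQueryWriteClient,
)

# Default construction: Application Default Credentials, gRPC transport.
client = BigQueryWriteClient()

# Equivalent construction with an explicit API endpoint.
client = BigQueryWriteClient(
    client_options=ClientOptions(api_endpoint="bigquerystorage.googleapis.com")
)
```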

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.__exit__

__exit__(type,value,traceback)

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.append_rows

append_rows(requests:typing.Optional[typing.Iterator[google.cloud.bigquery_storage_v1beta2.types.storage.AppendRowsRequest]]=None,*,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->typing.Iterable[google.cloud.bigquery_storage_v1beta2.types.storage.AppendRowsResponse]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.batch_commit_write_streams

batch_commit_write_streams(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta2.types.storage.BatchCommitWriteStreamsRequest,dict,]]=None,*,parent:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->(google.cloud.bigquery_storage_v1beta2.types.storage.BatchCommitWriteStreamsResponse)

Atomically commits a group of PENDING streams that belong to the same parent table.

See more:google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.batch_commit_write_streams

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.common_billing_account_path

common_billing_account_path(billing_account:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.common_folder_path

common_folder_path(folder:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.common_location_path

common_location_path(project:str,location:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.common_organization_path

common_organization_path(organization:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.common_project_path

common_project_path(project:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.create_write_stream

create_write_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta2.types.storage.CreateWriteStreamRequest,dict,]]=None,*,parent:typing.Optional[str]=None,write_stream:typing.Optional[google.cloud.bigquery_storage_v1beta2.types.stream.WriteStream]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1beta2.types.stream.WriteStream

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.finalize_write_stream

finalize_write_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta2.types.storage.FinalizeWriteStreamRequest,dict,]]=None,*,name:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1beta2.types.storage.FinalizeWriteStreamResponse

Finalize a write stream so that no new data can be appended to the stream.

See more:google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.finalize_write_stream
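
create_write_stream, finalize_write_stream, and batch_commit_write_streams together form the PENDING-stream workflow: rows appended to a PENDING stream stay invisible until the stream is finalized and committed. A condensed sketch with hypothetical identifiers (the append step itself is elided):

```python
from google.cloud.bigquery_storage_v1beta2 import types
from google.cloud.bigquery_storage_v1beta2.services.big_query_write import (
    BigQueryWriteClient,
)

client = BigQueryWriteClient()
parent = BigQueryWriteClient.table_path("my-project", "my_dataset", "my_table")

# 1. Create a PENDING stream; appended rows are buffered until commit.
stream = types.WriteStream()
stream.type_ = types.WriteStream.Type.PENDING
write_stream = client.create_write_stream(parent=parent, write_stream=stream)

# ... append rows to write_stream.name (see append_rows) ...

# 2. Finalize the stream so no further appends are accepted.
client.finalize_write_stream(name=write_stream.name)

# 3. Atomically commit the finalized stream into the parent table.
commit_response = client.batch_commit_write_streams(
    request=types.BatchCommitWriteStreamsRequest(
        parent=parent, write_streams=[write_stream.name]
    )
)
print(commit_response.commit_time)
```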

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.flush_rows

flush_rows(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta2.types.storage.FlushRowsRequest,dict]]=None,*,write_stream:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1beta2.types.storage.FlushRowsResponse
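
flush_rows applies to BUFFERED streams: rows appended after the last flush stay invisible until flushed. A minimal sketch, assuming stream_name refers to an existing BUFFERED write stream (a flush offset can be supplied by passing a full FlushRowsRequest instead of the flattened argument):

```python
from google.cloud.bigquery_storage_v1beta2.services.big_query_write import (
    BigQueryWriteClient,
)

client = BigQueryWriteClient()

# Hypothetical BUFFERED stream name.
stream_name = (
    "projects/my-project/datasets/my_dataset/tables/my_table/streams/my-stream"
)

response = client.flush_rows(write_stream=stream_name)
print(response.offset)  # offset up to which rows are now visible
```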

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.from_service_account_file

from_service_account_file(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.from_service_account_file
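
The from_service_account_* constructors bypass Application Default Credentials: the file and json variants take a key-file path, while info takes the already-parsed dict. For example, with a hypothetical key file:

```python
import json

from google.cloud.bigquery_storage_v1beta2.services.big_query_write import (
    BigQueryWriteClient,
)

# From a key file on disk (from_service_account_json is an alias).
client = BigQueryWriteClient.from_service_account_file("service-account.json")

# From an already-loaded dict.
with open("service-account.json") as handle:
    info = json.load(handle)
client = BigQueryWriteClient.from_service_account_info(info)
```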

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.from_service_account_info

from_service_account_info(info:dict,*args,**kwargs)

Creates an instance of this client using the provided credentials info.

See more:google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.from_service_account_info

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.from_service_account_json

from_service_account_json(filename:str,*args,**kwargs)

Creates an instance of this client using the provided credentials file.

See more:google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.from_service_account_json

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.get_mtls_endpoint_and_cert_source

get_mtls_endpoint_and_cert_source(client_options:typing.Optional[google.api_core.client_options.ClientOptions]=None,)

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.get_write_stream

get_write_stream(request:typing.Optional[typing.Union[google.cloud.bigquery_storage_v1beta2.types.storage.GetWriteStreamRequest,dict,]]=None,*,name:typing.Optional[str]=None,retry:typing.Optional[typing.Union[google.api_core.retry.retry_unary.Retry,google.api_core.gapic_v1.method._MethodDefault,]]=_MethodDefault._DEFAULT_VALUE,timeout:typing.Union[float,object]=_MethodDefault._DEFAULT_VALUE,metadata:typing.Sequence[typing.Tuple[str,typing.Union[str,bytes]]]=())->google.cloud.bigquery_storage_v1beta2.types.stream.WriteStream

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.parse_common_billing_account_path

parse_common_billing_account_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.parse_common_folder_path

parse_common_folder_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.parse_common_location_path

parse_common_location_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.parse_common_organization_path

parse_common_organization_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.parse_common_project_path

parse_common_project_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.parse_table_path

parse_table_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.parse_write_stream_path

parse_write_stream_path(path:str)->typing.Dict[str,str]

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.table_path

table_path(project:str,dataset:str,table:str)->str

google.cloud.bigquery_storage_v1beta2.services.big_query_write.BigQueryWriteClient.write_stream_path

write_stream_path(project:str,dataset:str,table:str,stream:str)->str

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.add_done_callback

add_done_callback(fn)

Add a callback to be executed when the operation is complete.

See more:google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.add_done_callback
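
The callback is invoked with the completed AppendRowsFuture itself, so it can inspect the result or exception without blocking the sending code. A small sketch, assuming future came from AppendRowsStream.send:

```python
def log_append_outcome(future):
    """Done-callback for an AppendRowsFuture; the future is passed back in."""
    try:
        response = future.result()  # already done, so this returns immediately
        print("append acknowledged:", response)
    except Exception as exc:
        print("append failed:", exc)


# future = append_rows_stream.send(request)  # see AppendRowsStream.send below
# future.add_done_callback(log_append_outcome)
```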

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.cancel

cancel()

Stops pulling messages and shuts down the background thread consuming messages.

See more:google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.cancel

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.cancelled

cancelled()

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.done

done(retry:typing.Optional[google.api_core.retry.retry_unary.Retry]=None,)->bool

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.exception

exception(timeout=object)

Get the exception from the operation, blocking if necessary.

See more:google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.exception

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.result

result(timeout=object,retry=None,polling=None)

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.running

running()

True if the operation is currently running.

See more:google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.running

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.set_exception

set_exception(exception)

Set the result of the future as being the given exception.

See more:google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.set_exception

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.set_result

set_result(result)

Set the return value of work associated with the future.

See more:google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture.set_result

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsStream

AppendRowsStream(client:google.cloud.bigquery_storage_v1beta2.services.big_query_write.client.BigQueryWriteClient,initial_request_template:google.cloud.bigquery_storage_v1beta2.types.storage.AppendRowsRequest,metadata:typing.Sequence[typing.Tuple[str,str]]=(),)
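
The initial_request_template is merged into the first request sent on each connection, so it typically carries the write stream name and the protobuf writer schema. A sketch under the assumption that row_message_class is a protobuf message class generated from the destination table's schema:

```python
from google.protobuf import descriptor_pb2

from google.cloud.bigquery_storage_v1beta2 import types, writer
from google.cloud.bigquery_storage_v1beta2.services.big_query_write import (
    BigQueryWriteClient,
)


def make_append_stream(
    write_client: BigQueryWriteClient, stream_name: str, row_message_class
):
    """Build an AppendRowsStream whose template carries the writer schema."""
    request_template = types.AppendRowsRequest()
    request_template.write_stream = stream_name

    # Describe the row message so the backend can decode serialized_rows.
    proto_descriptor = descriptor_pb2.DescriptorProto()
    row_message_class.DESCRIPTOR.CopyToProto(proto_descriptor)

    request_template.proto_rows = types.AppendRowsRequest.ProtoData(
        writer_schema=types.ProtoSchema(proto_descriptor=proto_descriptor)
    )

    return writer.AppendRowsStream(write_client, request_template)
```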

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsStream.add_close_callback

add_close_callback(callback:typing.Callable)

Schedules a callable when the manager closes.

See more:google.cloud.bigquery_storage_v1beta2.writer.AppendRowsStream.add_close_callback

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsStream.close

close(reason:typing.Optional[Exception]=None)

Stops consuming messages and shuts down all helper threads.

See more:google.cloud.bigquery_storage_v1beta2.writer.AppendRowsStream.close

google.cloud.bigquery_storage_v1beta2.writer.AppendRowsStream.send

send(request:google.cloud.bigquery_storage_v1beta2.types.storage.AppendRowsRequest,)->google.cloud.bigquery_storage_v1beta2.writer.AppendRowsFuture

Send an append rows request to the open stream.

See more:google.cloud.bigquery_storage_v1beta2.writer.AppendRowsStream.send
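
send queues one AppendRowsRequest on the open connection and returns an AppendRowsFuture; calling result() on that future blocks until the server acknowledges the append. A sketch that pairs with the AppendRowsStream example above (row is a populated instance of the hypothetical generated row message class):

```python
from google.cloud.bigquery_storage_v1beta2 import types


def append_one(append_rows_stream, row):
    """Send one serialized protobuf row and wait for the acknowledgement."""
    proto_rows = types.ProtoRows()
    proto_rows.serialized_rows.append(row.SerializeToString())

    request = types.AppendRowsRequest(
        proto_rows=types.AppendRowsRequest.ProtoData(rows=proto_rows)
    )

    future = append_rows_stream.send(request)
    return future.result()  # blocks until acknowledged; raises on error
```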
