Generate text by using the AI.GENERATE_TEXT function
This document shows you how to create a BigQuery ML remote model that represents a Vertex AI model, and then use that remote model with the AI.GENERATE_TEXT function to generate text.
The following types of remote models are supported:
- Remote models over any of the generally available or preview Gemini models.
- Remote models over Anthropic Claude models.
- Remote models over Llama models.
- Remote models over Mistral AI models.
- Remote models over supported open models.
Depending on the Vertex AI model that you choose, you can generate text based on unstructured data input from object tables or text input from standard tables.
Required roles
To create a remote model and generate text, you need the following Identity and Access Management (IAM) roles:
- Create and use BigQuery datasets, tables, and models: BigQuery Data Editor (roles/bigquery.dataEditor) on your project.
- Create, delegate, and use BigQuery connections: BigQuery Connections Admin (roles/bigquery.connectionsAdmin) on your project. If you don't have a default connection configured, you can create and set one as part of running the CREATE MODEL statement. To do so, you must have BigQuery Admin (roles/bigquery.admin) on your project. For more information, see Configure the default connection.
- Grant permissions to the connection's service account: Project IAM Admin (roles/resourcemanager.projectIamAdmin) on the project that contains the Vertex AI endpoint. This is the current project for remote models that you create by specifying the model name as an endpoint, or the project identified in the URL for remote models that you create by specifying a URL as an endpoint. If you use the remote model to analyze unstructured data from an object table, and the Cloud Storage bucket that you use in the object table is in a different project than your Vertex AI endpoint, you must also have Storage Admin (roles/storage.admin) on the Cloud Storage bucket used by the object table.
- Create BigQuery jobs: BigQuery Job User (roles/bigquery.jobUser) on your project.
These predefined roles contain the permissions required to perform the tasks in this document. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
- Create a dataset: bigquery.datasets.create
- Create, delegate, and use a connection: bigquery.connections.*
- Set service account permissions: resourcemanager.projects.getIamPolicy and resourcemanager.projects.setIamPolicy
- Create a model and run inference: bigquery.jobs.create, bigquery.models.create, bigquery.models.getData, bigquery.models.updateData, and bigquery.models.updateMetadata
You might also be able to get these permissions with custom roles or other predefined roles.
Before you begin
In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.
Roles required to select or create a project
- Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
- Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
Verify that billing is enabled for your Google Cloud project.
Enable the BigQuery, BigQuery Connection, and Vertex AI APIs.
Roles required to enable APIs
To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.
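If you prefer the command line, the following is a minimal sketch of enabling the same APIs with the gcloud CLI; the service names shown are the standard identifiers for the BigQuery, BigQuery Connection, and Vertex AI APIs:

# Enable the BigQuery, BigQuery Connection, and Vertex AI APIs.
gcloud services enable bigquery.googleapis.com \
    bigqueryconnection.googleapis.com \
    aiplatform.googleapis.com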
Create a dataset
Create a BigQuery dataset to contain your resources:
Console
In the Google Cloud console, go to the BigQuery page.
In the left pane, click Explorer. If you don't see the left pane, click Expand left pane to open the pane.
In the Explorer pane, click your project name.

Click View actions > Create dataset.

On the Create dataset page, do the following:

For Dataset ID, type a name for the dataset.

For Location type, select Region or Multi-region.

- If you selected Region, then select a location from the Region list.
- If you selected Multi-region, then select US or Europe from the Multi-region list.

Click Create dataset.
bq
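In a command-line environment, you can create the dataset with the bq tool. The following is a minimal sketch, assuming the same LOCATION and DATASET_ID values that you would choose in the console:

bq --location=LOCATION mk --dataset DATASET_ID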
Create a connection
Create a Cloud resource connection for the remote model to use, and get the connection's service account. Create the connection in the same location as the dataset that you created in the previous step.
You can skip this step if you either have a default connection configured, or you have the BigQuery Admin role.
Select one of the following options:

Console
Go to the BigQuery page.
In the left pane, click Explorer. If you don't see the left pane, click Expand left pane to open the pane.
In the Explorer pane, expand your project name, and then click Connections.

On the Connections page, click Create connection.

For Connection type, choose Vertex AI remote models, remote functions, BigLake and Spanner (Cloud Resource).

In the Connection ID field, enter a name for your connection.

For Location type, select a location for your connection. The connection should be colocated with your other resources, such as datasets.

Click Create connection.

Click Go to connection.

In the Connection info pane, copy the service account ID for use in a later step.
bq
In a command-line environment, create a connection:
bq mk --connection --location=REGION --project_id=PROJECT_ID \
    --connection_type=CLOUD_RESOURCE CONNECTION_ID
The --project_id parameter overrides the default project.

Replace the following:

- REGION: your connection region
- PROJECT_ID: your Google Cloud project ID
- CONNECTION_ID: an ID for your connection
When you create a connection resource, BigQuery creates a unique system service account and associates it with the connection.

Troubleshooting: If you get the following connection error, update the Google Cloud SDK:
Flags parsing error: flag --connection_type=CLOUD_RESOURCE: value should be one of...
Retrieve and copy the service account ID for use in a later step:

bq show --connection PROJECT_ID.REGION.CONNECTION_ID
The output is similar to the following:
name                       properties
1234.REGION.CONNECTION_ID  {"serviceAccountId": "connection-1234-9u56h9@gcp-sa-bigquery-condel.iam.gserviceaccount.com"}
Python
Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
import google.api_core.exceptions
from google.cloud import bigquery_connection_v1

client = bigquery_connection_v1.ConnectionServiceClient()


def create_connection(
    project_id: str,
    location: str,
    connection_id: str,
):
    """Creates a BigQuery connection to a Cloud Resource.

    A Cloud Resource connection creates a service account which can then be
    granted access to other Google Cloud resources for federated queries.

    Args:
        project_id: The Google Cloud project ID.
        location: The location of the connection (for example, "us-central1").
        connection_id: The ID of the connection to create.
    """
    parent = client.common_location_path(project_id, location)
    connection = bigquery_connection_v1.Connection(
        friendly_name="Example Connection",
        description="A sample connection for a Cloud Resource.",
        cloud_resource=bigquery_connection_v1.CloudResourceProperties(),
    )
    try:
        created_connection = client.create_connection(
            parent=parent, connection_id=connection_id, connection=connection
        )
        print(f"Successfully created connection: {created_connection.name}")
        print(f"Friendly name: {created_connection.friendly_name}")
        print(f"Service Account: {created_connection.cloud_resource.service_account_id}")
    except google.api_core.exceptions.AlreadyExists:
        print(f"Connection with ID '{connection_id}' already exists.")
        print("Please use a different connection ID.")
    except Exception as e:
        print(f"An unexpected error occurred while creating the connection: {e}")

Node.js
Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
const {ConnectionServiceClient} = require('@google-cloud/bigquery-connection').v1;
const {status} = require('@grpc/grpc-js');

const client = new ConnectionServiceClient();

/**
 * Creates a new BigQuery connection to a Cloud Resource.
 *
 * A Cloud Resource connection creates a service account that can be granted
 * access to other Google Cloud resources.
 *
 * @param {string} projectId The Google Cloud project ID. For example, 'example-project-id'.
 * @param {string} location The location to create the connection in. For example, 'us-central1'.
 * @param {string} connectionId The ID of the connection to create. For example, 'example-connection-id'.
 */
async function createConnection(projectId, location, connectionId) {
  const parent = client.locationPath(projectId, location);
  const connection = {
    friendlyName: 'Example Connection',
    description: 'A sample connection for a Cloud Resource',
    // The service account for this cloudResource will be created by the API.
    // Its ID will be available in the response.
    cloudResource: {},
  };
  const request = {parent, connectionId, connection};
  try {
    const [response] = await client.createConnection(request);
    console.log(`Successfully created connection: ${response.name}`);
    console.log(`Friendly name: ${response.friendlyName}`);
    console.log(`Service Account: ${response.cloudResource.serviceAccountId}`);
  } catch (err) {
    if (err.code === status.ALREADY_EXISTS) {
      console.log(`Connection '${connectionId}' already exists.`);
    } else {
      console.error(`Error creating connection: ${err.message}`);
    }
  }
}

Terraform
Use the google_bigquery_connection resource.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

The following example creates a Cloud resource connection named my_cloud_resource_connection in the US region:
# This queries the provider for project information.
data "google_project" "default" {}

# This creates a cloud resource connection in the US region named my_cloud_resource_connection.
# Note: The cloud resource nested object has only one output field - serviceAccountId.
resource "google_bigquery_connection" "default" {
  connection_id = "my_cloud_resource_connection"
  project       = data.google_project.default.project_id
  location      = "US"
  cloud_resource {}
}

To apply your Terraform configuration in a Google Cloud project, complete the steps in the following sections.
Prepare Cloud Shell
- Launch Cloud Shell.
Set the default Google Cloud project where you want to apply your Terraform configurations.
You only need to run this command once per project, and you can run it in any directory.
export GOOGLE_CLOUD_PROJECT=PROJECT_ID
Environment variables are overridden if you set explicit values in the Terraform configuration file.
Prepare the directory
Each Terraform configuration file must have its own directory (also called a root module).

- In Cloud Shell, create a directory and a new file within that directory. The filename must have the .tf extension—for example, main.tf. In this tutorial, the file is referred to as main.tf.

  mkdir DIRECTORY && cd DIRECTORY && touch main.tf
If you are following a tutorial, you can copy the sample code in each section or step.
Copy the sample code into the newly created main.tf.

Optionally, copy the code from GitHub. This is recommended when the Terraform snippet is part of an end-to-end solution.
- Review and modify the sample parameters to apply to your environment.
- Save your changes.
- Initialize Terraform. You only need to do this once per directory.
terraform init
Optionally, to use the latest Google provider version, include the -upgrade option:

terraform init -upgrade
Apply the changes
- Review the configuration and verify that the resources that Terraform is going to create or update match your expectations:
terraform plan
Make corrections to the configuration as necessary.
- Apply the Terraform configuration by running the following command and entering yes at the prompt:

  terraform apply
Wait until Terraform displays the "Apply complete!" message.
- Open your Google Cloud project to view the results. In the Google Cloud console, navigate to your resources in the UI to make sure that Terraform has created or updated them.
Grant a role to the remote model connection's service account
You must grant the Vertex AI User role to the service account of the connection that the remote model uses.

If you plan to specify the remote model's endpoint as a URL, for example endpoint = 'https://us-central1-aiplatform.googleapis.com/v1/projects/myproject/locations/us-central1/publishers/google/models/gemini-2.0-flash', grant this role in the same project that you specify in the URL.

If you plan to specify the remote model's endpoint by using the model name, for example endpoint = 'gemini-2.0-flash', grant this role in the same project where you plan to create the remote model.

Granting the role in a different project results in the error bqcx-1234567890-wxyz@gcp-sa-bigquery-condel.iam.gserviceaccount.com does not have the permission to access resource.
To grant the Vertex AI User role, follow these steps:
Console
Go to the IAM & Admin page.

Click Add. The Add principals dialog opens.

In the New principals field, enter the service account ID that you copied earlier.

In the Select a role field, select Vertex AI, and then select Vertex AI User.

Click Save.
gcloud
Use the gcloud projects add-iam-policy-binding command:

gcloud projects add-iam-policy-binding 'PROJECT_NUMBER' \
    --member='serviceAccount:MEMBER' \
    --role='roles/aiplatform.user' \
    --condition=None
Replace the following:
- PROJECT_NUMBER: your project number
- MEMBER: the service account ID that you copied earlier
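For example, using the service account ID from the earlier sample output and a hypothetical project number:

gcloud projects add-iam-policy-binding '1234567890' \
    --member='serviceAccount:connection-1234-9u56h9@gcp-sa-bigquery-condel.iam.gserviceaccount.com' \
    --role='roles/aiplatform.user' \
    --condition=None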
Grant a role to the object table connection's service account
If you are using the remote model to generate text from object table data, grant the object table connection's service account the Vertex AI User role in the same project where you plan to create the remote model. Otherwise, you can skip this step.
To find the service account for the object table connection, follow these steps:

Go to the BigQuery page.

In the left pane, click Explorer. If you don't see the left pane, click Expand left pane to open the pane.

In the Explorer pane, click Datasets, and then select a dataset that contains the object table.

Click Overview > Tables, and then select the object table.

In the editor pane, click the Details tab.

Note the connection name in the Connection ID field.

In the Explorer pane, click Connections.

Select the connection that matches the one from the object table's Connection ID field.

Copy the value in the Service account id field.
To grant the role, follow these steps:
Console
Go to the IAM & Admin page.

Click Add. The Add principals dialog opens.

In the New principals field, enter the service account ID that you copied earlier.

In the Select a role field, select Vertex AI, and then select Vertex AI User.

Click Save.
gcloud
Use the gcloud projects add-iam-policy-binding command:

gcloud projects add-iam-policy-binding 'PROJECT_NUMBER' \
    --member='serviceAccount:MEMBER' \
    --role='roles/aiplatform.user' \
    --condition=None
Replace the following:
- PROJECT_NUMBER: your project number
- MEMBER: the service account ID that you copied earlier
Enable a partner model
This step is only required if you want to use Anthropic Claude, Llama, or Mistral AI models.

In the Google Cloud console, go to the Vertex AI Model Garden page.
Search or browse for the partner model that you want to use.
Click the model card.
On the model page, click Enable.

Fill out the requested enablement information, and then click Next.

In the Terms and conditions section, select the checkbox.

Click Agree to agree to the terms and conditions and enable the model.
Choose an open model deployment method
If you are creating a remote model over a supported open model, you can automatically deploy the open model at the same time that you create the remote model by specifying the Vertex AI Model Garden or Hugging Face model ID in the CREATE MODEL statement. Alternatively, you can manually deploy the open model first, and then use that open model with the remote model by specifying the model endpoint in the CREATE MODEL statement. For more information, see Deploy open models.
Create a BigQuery ML remote model
Create a remote model:
New open models
Preview: This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA products and features are available "as is" and might have limited support. For more information, see the launch stage descriptions.
In the Google Cloud console, go to the BigQuery page.

Using the SQL editor, create a remote model:
CREATE OR REPLACE MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`
  REMOTE WITH CONNECTION {DEFAULT | `PROJECT_ID.REGION.CONNECTION_ID`}
  OPTIONS (
    {HUGGING_FACE_MODEL_ID = 'HUGGING_FACE_MODEL_ID' |
     MODEL_GARDEN_MODEL_NAME = 'MODEL_GARDEN_MODEL_NAME'}
    [, HUGGING_FACE_TOKEN = 'HUGGING_FACE_TOKEN']
    [, MACHINE_TYPE = 'MACHINE_TYPE']
    [, MIN_REPLICA_COUNT = MIN_REPLICA_COUNT]
    [, MAX_REPLICA_COUNT = MAX_REPLICA_COUNT]
    [, RESERVATION_AFFINITY_TYPE = {'NO_RESERVATION' | 'ANY_RESERVATION' | 'SPECIFIC_RESERVATION'}]
    [, RESERVATION_AFFINITY_KEY = 'compute.googleapis.com/reservation-name']
    [, RESERVATION_AFFINITY_VALUES = RESERVATION_AFFINITY_VALUES]
    [, ENDPOINT_IDLE_TTL = ENDPOINT_IDLE_TTL]
  );
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset to contain the model. This dataset must be in the same location as the connection that you are using.
- MODEL_NAME: the name of the model.
- REGION: the region used by the connection.
- CONNECTION_ID: the ID of your BigQuery connection. You can get this value by viewing the connection details in the Google Cloud console and copying the value in the last section of the fully qualified connection ID that is shown in Connection ID. For example, projects/myproject/locations/connection_location/connections/myconnection.
- HUGGING_FACE_MODEL_ID: a STRING value that specifies the model ID for a supported Hugging Face model, in the format provider_name/model_name. For example, deepseek-ai/DeepSeek-R1. You can get the model ID by clicking the model name in the Hugging Face Model Hub and then copying the model ID from the top of the model card.
- MODEL_GARDEN_MODEL_NAME: a STRING value that specifies the model ID and model version of a supported Vertex AI Model Garden model, in the format publishers/publisher/models/model_name@model_version. For example, publishers/openai/models/gpt-oss@gpt-oss-120b. You can get the model ID by clicking the model card in the Vertex AI Model Garden and then copying the model ID from the Model ID field. You can get the default model version by copying it from the Version field on the model card. To see other model versions that you can use, click Deploy model and then click the Resource ID field.
- HUGGING_FACE_TOKEN: a STRING value that specifies the Hugging Face User Access Token to use. You can only specify a value for this option if you also specify a value for the HUGGING_FACE_MODEL_ID option. The token must have the read role at minimum, but tokens with a broader scope are also acceptable. This option is required when the model identified by the HUGGING_FACE_MODEL_ID value is a Hugging Face gated or private model. Some gated models require explicit agreement to their terms of service before access is granted. To agree to these terms, follow these steps:
  1. Navigate to the model's page on the Hugging Face website.
  2. Locate and review the model's terms of service. A link to the service agreement is typically found on the model card.
  3. Accept the terms as prompted on the page.
- MACHINE_TYPE: a STRING value that specifies the machine type to use when deploying the model to Vertex AI. For information about supported machine types, see Machine types. If you don't specify a value for the MACHINE_TYPE option, the Vertex AI Model Garden default machine type for the model is used.
- MIN_REPLICA_COUNT: an INT64 value that specifies the minimum number of machine replicas used when deploying the model on a Vertex AI endpoint. The service increases or decreases the replica count as required by the inference load on the endpoint. The number of replicas used is never lower than the MIN_REPLICA_COUNT value and never higher than the MAX_REPLICA_COUNT value. The MIN_REPLICA_COUNT value must be in the range [1, 4096]. The default value is 1.
- MAX_REPLICA_COUNT: an INT64 value that specifies the maximum number of machine replicas used when deploying the model on a Vertex AI endpoint. The service increases or decreases the replica count as required by the inference load on the endpoint. The number of replicas used is never lower than the MIN_REPLICA_COUNT value and never higher than the MAX_REPLICA_COUNT value. The MAX_REPLICA_COUNT value must be in the range [1, 4096]. The default value is the MIN_REPLICA_COUNT value.
- RESERVATION_AFFINITY_TYPE: determines whether the deployed model uses Compute Engine reservations to provide assured virtual machine (VM) availability when serving predictions, and specifies whether the model uses VMs from all available reservations or just one specific reservation. For more information, see Compute Engine reservation affinity. You can only use Compute Engine reservations that are shared with Vertex AI. For more information, see Allow a reservation to be consumed. Supported values are as follows:
  - NO_RESERVATION: no reservation is consumed when your model is deployed to a Vertex AI endpoint. Specifying NO_RESERVATION has the same effect as not specifying a reservation affinity.
  - ANY_RESERVATION: the Vertex AI model deployment consumes VMs from Compute Engine reservations that are in the current project or that are shared with the project, and that are configured for automatic consumption. Only VMs that meet the following qualifications are used:
    - They use the machine type specified by the MACHINE_TYPE value.
    - If the BigQuery dataset in which you are creating the remote model is in a single region, the reservation must be in the same region. If the dataset is in the US multi-region, the reservation must be in the us-central1 region. If the dataset is in the EU multi-region, the reservation must be in the europe-west4 region.
    If there isn't enough capacity in the available reservations, or if no suitable reservations are found, the system provisions on-demand Compute Engine VMs to meet the resource requirements.
  - SPECIFIC_RESERVATION: the Vertex AI model deployment consumes VMs only from the reservation that you specify in the RESERVATION_AFFINITY_VALUES value. This reservation must be configured for specifically targeted consumption. Deployment fails if the specified reservation doesn't have sufficient capacity.
- RESERVATION_AFFINITY_KEY: the string compute.googleapis.com/reservation-name. You must specify this option when the RESERVATION_AFFINITY_TYPE value is SPECIFIC_RESERVATION.
- RESERVATION_AFFINITY_VALUES: an ARRAY<STRING> value that specifies the full resource name of the Compute Engine reservation, in the following format: projects/myproject/zones/reservation_zone/reservations/reservation_name. For example, RESERVATION_AFFINITY_VALUES = ['projects/myProject/zones/us-central1-a/reservations/myReservationName']. You can get the reservation name and zone from the Reservations page of the Google Cloud console. For more information, see View reservations. You must specify this option when the RESERVATION_AFFINITY_TYPE value is SPECIFIC_RESERVATION.
- ENDPOINT_IDLE_TTL: an INTERVAL value that specifies the duration of inactivity after which the open model is automatically undeployed from the Vertex AI endpoint. To enable automatic undeployment, specify an interval literal value between 390 minutes (6.5 hours) and 7 days. For example, specify INTERVAL 8 HOUR to have the model undeployed after 8 hours of idleness. The default value is 390 minutes (6.5 hours). Model inactivity is defined as the amount of time that has passed since any of the following operations were performed on the model:
  - Running the CREATE MODEL statement.
  - Running the ALTER MODEL statement with the DEPLOY_MODEL argument set to TRUE.
  - Sending an inference request to the model endpoint, for example by running the AI.GENERATE_EMBEDDING or AI.GENERATE_TEXT function.
  Each of these operations resets the inactivity timer to zero. The reset is triggered at the start of the BigQuery job that performs the operation.
  After the model is undeployed, inference requests sent to the model return an error. The BigQuery model object remains unchanged, including model metadata. To use the model for inference again, you must redeploy it by running the ALTER MODEL statement on the model and setting the DEPLOY_MODEL option to TRUE, as shown in the sketch after this list.
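For example, the following is a minimal sketch that deploys the DeepSeek-R1 Hugging Face model mentioned earlier over the default connection, using a hypothetical dataset named mydataset, and undeploys it after 8 hours of idleness:

-- Create a remote model over an open model and auto-deploy it.
CREATE OR REPLACE MODEL `mydataset.deepseek_model`
  REMOTE WITH CONNECTION DEFAULT
  OPTIONS (
    HUGGING_FACE_MODEL_ID = 'deepseek-ai/DeepSeek-R1',
    ENDPOINT_IDLE_TTL = INTERVAL 8 HOUR
  );

-- If the model is undeployed after being idle, redeploy it before inference.
ALTER MODEL `mydataset.deepseek_model` SET OPTIONS (DEPLOY_MODEL = TRUE);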
Deployed open models
In the Google Cloud console, go to the BigQuery page.

Using the SQL editor, create a remote model:
CREATE OR REPLACE MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`
  REMOTE WITH CONNECTION {DEFAULT | `PROJECT_ID.REGION.CONNECTION_ID`}
  OPTIONS (
    ENDPOINT = 'https://ENDPOINT_REGION-aiplatform.googleapis.com/v1/projects/ENDPOINT_PROJECT_ID/locations/ENDPOINT_REGION/endpoints/ENDPOINT_ID'
  );
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset to contain the model. This dataset must be in the same location as the connection that you are using.
- MODEL_NAME: the name of the model.
- REGION: the region used by the connection.
- CONNECTION_ID: the ID of your BigQuery connection. You can get this value by viewing the connection details in the Google Cloud console and copying the value in the last section of the fully qualified connection ID that is shown in Connection ID. For example, projects/myproject/locations/connection_location/connections/myconnection.
- ENDPOINT_REGION: the region in which the open model is deployed.
- ENDPOINT_PROJECT_ID: the project in which the open model is deployed.
- ENDPOINT_ID: the ID of the HTTPS endpoint used by the open model. You can get the endpoint ID by locating the open model on the Online prediction page and copying the value in the ID field.
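For example, the following is a minimal sketch, assuming a hypothetical dataset named mydataset, a connection named myconnection, and an open model already deployed to an endpoint in us-central1; the project and endpoint IDs shown are placeholders:

CREATE OR REPLACE MODEL `mydataset.deployed_open_model`
  REMOTE WITH CONNECTION `myproject.us.myconnection`
  OPTIONS (
    ENDPOINT = 'https://us-central1-aiplatform.googleapis.com/v1/projects/myproject/locations/us-central1/endpoints/1234567890'
  );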
All other models
In the Google Cloud console, go to the BigQuery page.

Using the SQL editor, create a remote model:
CREATE OR REPLACE MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`
  REMOTE WITH CONNECTION {DEFAULT | `PROJECT_ID.REGION.CONNECTION_ID`}
  OPTIONS (ENDPOINT = 'ENDPOINT');
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset to contain the model. This dataset must be in the same location as the connection that you are using.
- MODEL_NAME: the name of the model.
- REGION: the region used by the connection.
- CONNECTION_ID: the ID of your BigQuery connection. You can get this value by viewing the connection details in the Google Cloud console and copying the value in the last section of the fully qualified connection ID that is shown in Connection ID. For example, projects/myproject/locations/connection_location/connections/myconnection.
- ENDPOINT: the endpoint of the Vertex AI model to use. For pre-trained Vertex AI models, Claude models, and Mistral AI models, specify the name of the model. For some of these models, you can specify a particular version of the model as part of the name. For supported Gemini models, you can specify the global endpoint to improve availability. For Llama models, specify an OpenAI API endpoint in the format openapi/<publisher_name>/<model_name>. For example, openapi/meta/llama-3.1-405b-instruct-maas. For information about supported model names and versions, see ENDPOINT. The Vertex AI model that you specify must be available in the location where you are creating the remote model. For more information, see Locations.
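For example, the following is a minimal sketch that creates a remote model over the gemini-2.0-flash endpoint mentioned earlier, using the default connection and a hypothetical dataset named mydataset:

CREATE OR REPLACE MODEL `mydataset.text_model`
  REMOTE WITH CONNECTION DEFAULT
  OPTIONS (ENDPOINT = 'gemini-2.0-flash');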
Generate text from standard table data
Generate text by using the AI.GENERATE_TEXT function with prompt data from a standard table:
Gemini
SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  {TABLE PROJECT_ID.DATASET_ID.TABLE_NAME | (PROMPT_QUERY)},
  STRUCT(
    {[MAX_OUTPUT_TOKENS AS max_output_tokens]
     [, TOP_P AS top_p]
     [, TEMPERATURE AS temperature]
     [, STOP_SEQUENCES AS stop_sequences]
     [, GROUND_WITH_GOOGLE_SEARCH AS ground_with_google_search]
     [, SAFETY_SETTINGS AS safety_settings]
     | [MODEL_PARAMS AS model_params]}
    [, REQUEST_TYPE AS request_type]
  )
);
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- TABLE_NAME: the name of the table that contains the prompt. This table must have a column that's named prompt, or you can use an alias to use a differently named column.
- PROMPT_QUERY: a query that provides the prompt data. This query must produce a column that's named prompt. Note: We recommend against using the LIMIT and OFFSET clauses in the prompt query. Using them causes the query to process all of the input data first and then apply LIMIT and OFFSET.
- MAX_OUTPUT_TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,8192]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TEMPERATURE: a FLOAT64 value in the range [0.0,1.0] that controls the degree of randomness in token selection. The default is 0. Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. The default is 0.95.
- STOP_SEQUENCES: an ARRAY<STRING> value that removes the specified strings if they are included in responses from the model. Strings are matched exactly, including capitalization. The default is an empty array.
- GROUND_WITH_GOOGLE_SEARCH: a BOOL value that determines whether the Vertex AI model uses Grounding with Google Search when generating responses. Grounding lets the model use additional information from the internet when generating a response, in order to make model responses more specific and factual. When this field is set to TRUE, an additional grounding_result column is included in the results, providing the sources that the model used to gather additional information. The default is FALSE.
- SAFETY_SETTINGS: an ARRAY<STRUCT<STRING AS category, STRING AS threshold>> value that configures content safety thresholds to filter responses. The first element in the struct specifies a harm category, and the second element specifies a corresponding blocking threshold. The model filters out content that violates these settings. You can only specify each category once. For example, you can't specify both STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_MEDIUM_AND_ABOVE' AS threshold) and STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_ONLY_HIGH' AS threshold). If there is no safety setting for a given category, the BLOCK_MEDIUM_AND_ABOVE safety setting is used. Supported categories are as follows:
  - HARM_CATEGORY_HATE_SPEECH
  - HARM_CATEGORY_DANGEROUS_CONTENT
  - HARM_CATEGORY_HARASSMENT
  - HARM_CATEGORY_SEXUALLY_EXPLICIT
  Supported thresholds are as follows:
  - BLOCK_NONE (Restricted)
  - BLOCK_LOW_AND_ABOVE
  - BLOCK_MEDIUM_AND_ABOVE (Default)
  - BLOCK_ONLY_HIGH
  - HARM_BLOCK_THRESHOLD_UNSPECIFIED
- REQUEST_TYPE: a STRING value that specifies the type of inference request to send to the Gemini model. The request type determines what quota the request uses. Valid values are as follows:
  - DEDICATED: the AI.GENERATE_TEXT function only uses Provisioned Throughput quota. The AI.GENERATE_TEXT function returns the error Provisioned throughput is not purchased or is not active if Provisioned Throughput quota isn't available.
  - SHARED: the AI.GENERATE_TEXT function only uses dynamic shared quota (DSQ), even if you have purchased Provisioned Throughput quota.
  - UNSPECIFIED: the AI.GENERATE_TEXT function uses quota as follows:
    - If you haven't purchased Provisioned Throughput quota, the AI.GENERATE_TEXT function uses DSQ quota.
    - If you have purchased Provisioned Throughput quota, the AI.GENERATE_TEXT function uses the Provisioned Throughput quota first. If requests exceed the Provisioned Throughput quota, the overflow traffic uses DSQ quota.
  The default value is UNSPECIFIED. For more information, see Use Vertex AI Provisioned Throughput.
- MODEL_PARAMS: a JSON-formatted string literal that provides parameters to the model. The value must conform to the generateContent request body format. You can provide a value for any field in the request body except for the contents[] field. If you set this field, then you can't also specify any model parameters in the top-level struct argument to the AI.GENERATE_TEXT function. You must either specify every model parameter in the MODEL_PARAMS field, or omit this field and specify each parameter separately.
Example 1
The following example shows a request with these characteristics:
- Prompts for a summary of the text in the body column of the articles table.

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT('Summarize this text', body) AS prompt
    FROM mydataset.articles
  )
);
Example 2
The following example shows a request with these characteristics:
- Uses a query to create the prompt data by concatenating strings that provide prompt prefixes with table columns.
- Returns a short response.

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT(question, 'Text:', description, 'Category') AS prompt
    FROM mydataset.input_table
  ),
  STRUCT(100 AS max_output_tokens)
);
Example 3
The following example shows a request with these characteristics:
- Uses the prompt column of the prompts table for the prompt.

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  TABLE mydataset.prompts
);
Example 4
The following example shows a request with these characteristics:
- Uses the prompt column of the prompts table for the prompt.
- Returns a short response.
- Retrieves and returns public web data for response grounding.
- Filters out unsafe responses by using two safety settings.

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  TABLE mydataset.prompts,
  STRUCT(
    100 AS max_output_tokens,
    0.5 AS top_p,
    TRUE AS ground_with_google_search,
    [STRUCT('HARM_CATEGORY_HATE_SPEECH' AS category, 'BLOCK_LOW_AND_ABOVE' AS threshold),
     STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_MEDIUM_AND_ABOVE' AS threshold)] AS safety_settings
  )
);
Example 5
The following example shows a request with these characteristics:
- Uses the prompt column of the prompts table for the prompt.
- Returns a longer response.

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `mydataset.flash_2_model`,
  TABLE mydataset.prompts,
  STRUCT(0.4 AS temperature, 8192 AS max_output_tokens)
);
Example 6
The following example shows a request with these characteristics:
- Prompts for a summary of the text in the body column of the articles table.
- Retrieves and returns public web data for response grounding.
- Filters out unsafe responses by using two safety settings.

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT('Summarize this text', body) AS prompt
    FROM mydataset.articles
  ),
  STRUCT(
    0.1 AS temperature,
    TRUE AS ground_with_google_search,
    [STRUCT('HARM_CATEGORY_HATE_SPEECH' AS category, 'BLOCK_LOW_AND_ABOVE' AS threshold),
     STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_MEDIUM_AND_ABOVE' AS threshold)] AS safety_settings
  )
);
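As a sketch of the MODEL_PARAMS alternative described earlier, the following hypothetical request passes generation settings through a generateContent-style request body instead of the top-level struct fields; the specific JSON values are illustrative:

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  TABLE mydataset.prompts,
  STRUCT(
    '{"generationConfig": {"temperature": 0.2, "maxOutputTokens": 256}}' AS model_params
  )
);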
Claude
SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  {TABLE PROJECT_ID.DATASET_ID.TABLE_NAME | (PROMPT_QUERY)},
  STRUCT(
    {[MAX_OUTPUT_TOKENS AS max_output_tokens]
     [, TOP_K AS top_k]
     [, TOP_P AS top_p]
     | [MODEL_PARAMS AS model_params]}
  )
);
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- TABLE_NAME: the name of the table that contains the prompt. This table must have a column that's named prompt, or you can use an alias to use a differently named column.
- PROMPT_QUERY: a query that provides the prompt data. This query must produce a column that's named prompt. Note: We recommend against using the LIMIT and OFFSET clauses in the prompt query. Using them causes the query to process all of the input data first and then apply LIMIT and OFFSET.
- MAX_OUTPUT_TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,4096]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TOP_K: an INT64 value in the range [1,40] that determines the initial pool of tokens the model considers for selection. Specify a lower value for less random responses and a higher value for more random responses. If you don't specify a value, the model determines an appropriate value.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. If you don't specify a value, the model determines an appropriate value.
- MODEL_PARAMS: a JSON-formatted string literal that provides parameters to the model. The value must conform to the generateContent request body format. You can provide a value for any field in the request body except for the contents[] field. If you set this field, then you can't also specify any model parameters in the top-level struct argument to the AI.GENERATE_TEXT function. You must either specify every model parameter in the MODEL_PARAMS field, or omit this field and specify each parameter separately.
Example 1
The following example shows a request with these characteristics:
- Prompts for a summary of the text in the body column of the articles table.

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT('Summarize this text', body) AS prompt
    FROM mydataset.articles
  )
);
Example 2
The following example shows a request with these characteristics:
- Uses a query to create the prompt data by concatenating strings that provide prompt prefixes with table columns.
- Returns a short response.

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT(question, 'Text:', description, 'Category') AS prompt
    FROM mydataset.input_table
  ),
  STRUCT(100 AS max_output_tokens)
);
Example 3
The following example shows a request with these characteristics:
- Uses the prompt column of the prompts table for the prompt.

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  TABLE mydataset.prompts
);
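As a sketch of the Claude-specific top_k option described earlier, the following hypothetical request narrows both the token pool and the nucleus sampling threshold for less random responses:

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  TABLE mydataset.prompts,
  STRUCT(20 AS top_k, 0.8 AS top_p)
);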
Llama
SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  {TABLE PROJECT_ID.DATASET_ID.TABLE_NAME | (PROMPT_QUERY)},
  STRUCT(
    {[MAX_OUTPUT_TOKENS AS max_output_tokens]
     [, TOP_P AS top_p]
     [, TEMPERATURE AS temperature]
     [, STOP_SEQUENCES AS stop_sequences]
     | [MODEL_PARAMS AS model_params]}
  )
);
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- TABLE_NAME: the name of the table that contains the prompt. This table must have a column that's named prompt, or you can use an alias to use a differently named column.
- PROMPT_QUERY: a query that provides the prompt data. This query must produce a column that's named prompt. Note: We recommend against using the LIMIT and OFFSET clauses in the prompt query. Using them causes the query to process all of the input data first and then apply LIMIT and OFFSET.
- MAX_OUTPUT_TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,4096]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TEMPERATURE: a FLOAT64 value in the range [0.0,1.0] that controls the degree of randomness in token selection. The default is 0. Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. The default is 0.95.
- STOP_SEQUENCES: an ARRAY<STRING> value that removes the specified strings if they are included in responses from the model. Strings are matched exactly, including capitalization. The default is an empty array.
- MODEL_PARAMS: a JSON-formatted string literal that provides parameters to the model. The value must conform to the generateContent request body format. You can provide a value for any field in the request body except for the contents[] field. If you set this field, then you can't also specify any model parameters in the top-level struct argument to the AI.GENERATE_TEXT function. You must either specify every model parameter in the MODEL_PARAMS field, or omit this field and specify each parameter separately.
Example 1
The following example shows a request with these characteristics:
- Prompts for a summary of the text in the body column of the articles table.

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT('Summarize this text', body) AS prompt
    FROM mydataset.articles
  )
);
Example 2
The following example shows a request with these characteristics:
- Uses a query to create the prompt data by concatenating strings that provide prompt prefixes with table columns.
- Returns a short response.

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT(question, 'Text:', description, 'Category') AS prompt
    FROM mydataset.input_table
  ),
  STRUCT(100 AS max_output_tokens)
);
Example 3
The following example shows a request with these characteristics:
- Uses the prompt column of the prompts table for the prompt.

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  TABLE mydataset.prompts
);
Mistral AI
SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  {TABLE PROJECT_ID.DATASET_ID.TABLE_NAME | (PROMPT_QUERY)},
  STRUCT(
    {[MAX_OUTPUT_TOKENS AS max_output_tokens]
     [, TOP_P AS top_p]
     [, TEMPERATURE AS temperature]
     [, STOP_SEQUENCES AS stop_sequences]
     | [MODEL_PARAMS AS model_params]}
  )
);
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- TABLE_NAME: the name of the table that contains the prompt. This table must have a column that's named prompt, or you can use an alias to use a differently named column.
- PROMPT_QUERY: a query that provides the prompt data. This query must produce a column that's named prompt. Note: We recommend against using the LIMIT and OFFSET clauses in the prompt query. Using them causes the query to process all of the input data first and then apply LIMIT and OFFSET.
- MAX_OUTPUT_TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,4096]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 128.
- TEMPERATURE: a FLOAT64 value in the range [0.0,1.0] that controls the degree of randomness in token selection. The default is 0. Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. The default is 0.95.
- STOP_SEQUENCES: an ARRAY<STRING> value that removes the specified strings if they are included in responses from the model. Strings are matched exactly, including capitalization. The default is an empty array.
- MODEL_PARAMS: a JSON-formatted string literal that provides parameters to the model. The value must conform to the generateContent request body format. You can provide a value for any field in the request body except for the contents[] field. If you set this field, then you can't also specify any model parameters in the top-level struct argument to the AI.GENERATE_TEXT function. You must either specify every model parameter in the MODEL_PARAMS field, or omit this field and specify each parameter separately.
Example 1
The following example shows a request with these characteristics:
- Prompts for a summary of the text in the body column of the articles table.

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT('Summarize this text', body) AS prompt
    FROM mydataset.articles
  )
);
Example 2
The following example shows a request with these characteristics:
- Uses a query to create the prompt data by concatenating strings that provide prompt prefixes with table columns.
- Returns a short response.

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT(question, 'Text:', description, 'Category') AS prompt
    FROM mydataset.input_table
  ),
  STRUCT(100 AS max_output_tokens)
);
Example 3
The following example shows a request with these characteristics:
- Uses the prompt column of the prompts table for the prompt.

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  TABLE mydataset.prompts
);
Open models
Note: You must deploy open models in Vertex AI before you can use them. For more information, see Deploy open models.

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  {TABLE PROJECT_ID.DATASET_ID.TABLE_NAME | (PROMPT_QUERY)},
  STRUCT(
    {[MAX_OUTPUT_TOKENS AS max_output_tokens]
     [, TOP_K AS top_k]
     [, TOP_P AS top_p]
     [, TEMPERATURE AS temperature]
     | [MODEL_PARAMS AS model_params]}
  )
);
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset that contains the model.
- MODEL_NAME: the name of the model.
- TABLE_NAME: the name of the table that contains the prompt. This table must have a column that's named prompt, or you can use an alias to use a differently named column.
- PROMPT_QUERY: a query that provides the prompt data. This query must produce a column that's named prompt. Note: We recommend against using the LIMIT and OFFSET clauses in the prompt query. Using them causes the query to process all of the input data first and then apply LIMIT and OFFSET.
- MAX_OUTPUT_TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,4096]. Specify a lower value for shorter responses and a higher value for longer responses. If you don't specify a value, the model determines an appropriate value.
- TEMPERATURE: a FLOAT64 value in the range [0.0,1.0] that controls the degree of randomness in token selection. If you don't specify a value, the model determines an appropriate value. Lower values for temperature are good for prompts that require a more deterministic and less open-ended or creative response, while higher values for temperature can lead to more diverse or creative results. A value of 0 for temperature is deterministic, meaning that the highest probability response is always selected.
- TOP_K: an INT64 value in the range [1,40] that determines the initial pool of tokens the model considers for selection. Specify a lower value for less random responses and a higher value for more random responses. If you don't specify a value, the model determines an appropriate value.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that helps determine the probability of the tokens selected. Specify a lower value for less random responses and a higher value for more random responses. If you don't specify a value, the model determines an appropriate value.
- MODEL_PARAMS: a JSON-formatted string literal that provides parameters to the model. The value must conform to the generateContent request body format. You can provide a value for any field in the request body except for the contents[] field. If you set this field, then you can't also specify any model parameters in the top-level struct argument to the AI.GENERATE_TEXT function. You must either specify every model parameter in the MODEL_PARAMS field, or omit this field and specify each parameter separately.
Example 1
The following example shows a request with these characteristics:
- Prompts for a summary of the text in the body column of the articles table.

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT('Summarize this text', body) AS prompt
    FROM mydataset.articles
  )
);
Example 2
The following example shows a request with these characteristics:
- Uses a query to create the prompt data by concatenating strings that provide prompt prefixes with table columns.
- Returns a short response.

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  (
    SELECT CONCAT(question, 'Text:', description, 'Category') AS prompt
    FROM mydataset.input_table
  ),
  STRUCT(100 AS max_output_tokens)
);
Example 3
The following example shows a request with these characteristics:
- Uses the prompt column of the prompts table for the prompt.

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `mydataset.text_model`,
  TABLE mydataset.prompts
);
Generate text from object table data
Generate text by using the AI.GENERATE_TEXT function with a Gemini model to analyze unstructured data from an object table. You provide the prompt data in the prompt parameter.
SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `PROJECT_ID.DATASET.MODEL`,
  {TABLE `PROJECT_ID.DATASET.TABLE` | (QUERY_STATEMENT)},
  STRUCT(
    PROMPT AS prompt
    {[, MAX_OUTPUT_TOKENS AS max_output_tokens]
     [, TOP_P AS top_p]
     [, TEMPERATURE AS temperature]
     [, STOP_SEQUENCES AS stop_sequences]
     [, SAFETY_SETTINGS AS safety_settings]
     | [, MODEL_PARAMS AS model_params]}
  )
);
Replace the following:
- PROJECT_ID: the project that contains the resource.
- DATASET: the dataset that contains the resource.
- MODEL: the name of the remote model over the Vertex AI model. For more information about how to create this type of remote model, see The CREATE MODEL statement for remote models over LLMs. You can confirm which model is used by the remote model by opening the Google Cloud console and looking at the Remote endpoint field in the model details page. Note: Using a remote model based on a Gemini 2.5 model incurs charges for the thinking process.
- TABLE: the name of the object table that contains the content to analyze. For more information on what types of content you can analyze, see Input. The Cloud Storage bucket used by the input object table must be in the same project where you have created the model and where you are calling the AI.GENERATE_TEXT function.
- QUERY_STATEMENT: the GoogleSQL query that generates the image data. You can only specify WHERE and ORDER BY clauses in the query.
- PROMPT: a STRING value that contains the prompt to use to analyze the visual content. The prompt value must contain less than 16,000 tokens. A token might be smaller than a word and is approximately four characters. One hundred tokens correspond to approximately 60-80 words.
- MAX_OUTPUT_TOKENS: an INT64 value that sets the maximum number of tokens that can be generated in the response. This value must be in the range [1,8192]. Specify a lower value for shorter responses and a higher value for longer responses. The default is 1024.
- TOP_P: a FLOAT64 value in the range [0.0,1.0] that changes how the model selects tokens for output. Specify a lower value for less random responses and a higher value for more random responses. The default is 0.95. Tokens are selected from the most to least probable until the sum of their probabilities equals the TOP_P value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1, and the TOP_P value is 0.5, then the model selects either A or B as the next token by using the TEMPERATURE value and doesn't consider C.
- TEMPERATURE: a FLOAT64 value in the range [0.0,1.0] that controls the degree of randomness in token selection. Lower TEMPERATURE values are good for prompts that require a more deterministic and less open-ended or creative response, while higher TEMPERATURE values can lead to more diverse or creative results. A TEMPERATURE value of 0 is deterministic, meaning that the highest probability response is always selected. The default is 0.
- STOP_SEQUENCES: an ARRAY<STRING> value that removes the specified strings if they are included in responses from the model. Strings are matched exactly, including capitalization. The default is an empty array.
- SAFETY_SETTINGS: an ARRAY<STRUCT<STRING AS category, STRING AS threshold>> value that configures content safety thresholds to filter responses. The first element in the struct specifies a harm category, and the second element specifies a corresponding blocking threshold. The model filters out content that violates these settings. You can only specify each category once. For example, you can't specify both STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_MEDIUM_AND_ABOVE' AS threshold) and STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category, 'BLOCK_ONLY_HIGH' AS threshold). If there is no safety setting for a given category, the BLOCK_MEDIUM_AND_ABOVE safety setting is used. Supported categories are as follows:
  - HARM_CATEGORY_HATE_SPEECH
  - HARM_CATEGORY_DANGEROUS_CONTENT
  - HARM_CATEGORY_HARASSMENT
  - HARM_CATEGORY_SEXUALLY_EXPLICIT
  Supported thresholds are as follows:
  - BLOCK_NONE (Restricted)
  - BLOCK_LOW_AND_ABOVE
  - BLOCK_MEDIUM_AND_ABOVE (Default)
  - BLOCK_ONLY_HIGH
  - HARM_BLOCK_THRESHOLD_UNSPECIFIED
  For more information, refer to the definition of safety category and blocking threshold.
- MODEL_PARAMS: a JSON-formatted string literal that provides additional parameters to the model. The value must conform to the generateContent request body format. You can provide a value for any field in the request body except for the contents[] field. If you set this field, then you can't also specify any model parameters in the top-level struct argument to the AI.GENERATE_TEXT function.
Examples
This example translates and transcribes audio content from an object table that's named feedback:

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `mydataset.audio_model`,
  TABLE `mydataset.feedback`,
  STRUCT('What is the content of this audio clip, translated into Spanish?' AS prompt)
);

This example classifies PDF content from an object table that's named invoices:

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `mydataset.classify_model`,
  TABLE `mydataset.invoices`,
  STRUCT('Classify this document based on the invoice total, using the following categories: 0 to 100, 101 to 200, greater than 200' AS prompt)
);
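As a sketch of the QUERY_STATEMENT option described earlier, the following hypothetical request filters an object table named products by its content_type metadata column before analysis; the table name, model name, and prompt are illustrative:

SELECT *
FROM AI.GENERATE_TEXT(
  MODEL `mydataset.vision_model`,
  (SELECT * FROM `mydataset.products` WHERE content_type = 'image/jpeg'),
  STRUCT('Describe the product shown in this image.' AS prompt)
);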