Parse PDFs in a retrieval-augmented generation pipeline

This tutorial guides you through the process of creating a retrieval-augmented generation (RAG) pipeline based on parsed PDF content.

PDF files, such as financial documents, can be challenging to use in RAG pipelines because of their complex structure and mix of text, figures, and tables. This tutorial shows you how to use BigQuery ML capabilities in combination with Document AI's Layout Parser to build a RAG pipeline based on key information extracted from a PDF file.

You can alternatively perform this tutorial by using a Colab Enterprise notebook.

Objectives

This tutorial covers the following tasks:

  • Creating a Cloud Storage bucket and uploading a sample PDF file.
  • Creating a Cloud resource connection so that you can connect to Cloud Storage and Vertex AI from BigQuery.
  • Creating an object table over the PDF file to make the PDF file available in BigQuery.
  • Creating a Document AI processor that you can use to parse the PDF file.
  • Creating a remote model that lets you use the Document AI API to access the document processor from BigQuery.
  • Using the remote model with the ML.PROCESS_DOCUMENT function to parse the PDF contents into chunks and then write that content to a BigQuery table.
  • Extracting PDF content from the JSON data returned by the ML.PROCESS_DOCUMENT function, and then writing that content to a BigQuery table.
  • Creating a remote model that lets you use the Vertex AI text-embedding-005 embedding generation model from BigQuery.
  • Using the remote model with the AI.GENERATE_EMBEDDING function to generate embeddings from the parsed PDF content, and then writing those embeddings to a BigQuery table. Embeddings are numerical representations of the PDF content that enable you to perform semantic search and retrieval on the PDF content.
  • Using the VECTOR_SEARCH function on the embeddings to identify semantically similar PDF content.
  • Creating a remote model that lets you use a Gemini text generation model from BigQuery.
  • Performing retrieval-augmented generation (RAG) by using the remote model with the AI.GENERATE_TEXT function to generate text, using vector search results to augment the prompt input and improve results.

Costs

In this document, you use the following billable components of Google Cloud:

  • BigQuery: You incur costs for the data that you process in BigQuery.
  • Vertex AI: You incur costs for calls to Vertex AI models.
  • Document AI: You incur costs for calls to the Document AI API.
  • Cloud Storage: You incur costs for object storage in Cloud Storage.

To generate a cost estimate based on your projected usage, use the pricing calculator.

New Google Cloud users might be eligible for a free trial.

For more information, see the pricing pages for BigQuery, Vertex AI, Document AI, and Cloud Storage.

Before you begin

  1. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Roles required to select or create a project

    • Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
    • Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
    Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.

    Go to project selector

  2. Verify that billing is enabled for your Google Cloud project.

  3. Enable the BigQuery, BigQuery Connection, Vertex AI, Document AI, and Cloud Storage APIs.

    Roles required to enable APIs

    To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.

    Enable the APIs
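
    Alternatively, you can enable the same APIs from the command line. The following is a minimal sketch; it assumes that the Google Cloud CLI is installed and authenticated against your project:

        # Enable the APIs used in this tutorial.
        gcloud services enable \
            bigquery.googleapis.com \
            bigqueryconnection.googleapis.com \
            aiplatform.googleapis.com \
            documentai.googleapis.com \
            storage.googleapis.com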

Required roles

To run this tutorial, you need the following Identity and Access Management (IAM) roles:

  • Create Cloud Storage buckets and objects: Storage Admin (roles/storage.storageAdmin)
  • Create a document processor: Document AI Editor (roles/documentai.editor)
  • Create and use BigQuery datasets, connections, and models: BigQuery Admin (roles/bigquery.admin)
  • Grant permissions to the connection's service account: Project IAM Admin (roles/resourcemanager.projectIamAdmin)

These predefined roles contain the permissions required to perform the tasks in this document. To see the exact permissions that are required, expand the Required permissions section:

Required permissions

  • Create a dataset: bigquery.datasets.create
  • Create, delegate, and use a connection: bigquery.connections.*
  • Set the default connection: bigquery.config.*
  • Set service account permissions: resourcemanager.projects.getIamPolicy and resourcemanager.projects.setIamPolicy
  • Create an object table: bigquery.tables.create and bigquery.tables.update
  • Create Cloud Storage buckets and objects: storage.buckets.* and storage.objects.*
  • Create a model and run inference:
    • bigquery.jobs.create
    • bigquery.models.create
    • bigquery.models.getData
    • bigquery.models.updateData
    • bigquery.models.updateMetadata
  • Create a document processor:
    • documentai.processors.create
    • documentai.processors.update
    • documentai.processors.delete

You might also be able to get these permissions with custom roles or other predefined roles.

Create a dataset

Create a BigQuery dataset to store your ML model.

Console

  1. In the Google Cloud console, go to the BigQuery page.

    Go to the BigQuery page

  2. In the Explorer pane, click your project name.

  3. Click View actions > Create dataset.

  4. On the Create dataset page, do the following:

    • For Dataset ID, enter bqml_tutorial.

    • For Location type, select Multi-region, and then select US (multiple regions in United States).

    • Leave the remaining default settings as they are, and click Create dataset.

bq

To create a new dataset, use the bq mk command with the --location flag. For a full list of possible parameters, see the bq mk --dataset command reference.

  1. Create a dataset named bqml_tutorial with the data location set to US and a description of BigQuery ML tutorial dataset:

    bq --location=US mk -d \
        --description "BigQuery ML tutorial dataset." \
        bqml_tutorial

    Instead of using the --dataset flag, the command uses the -d shortcut. If you omit -d and --dataset, the command defaults to creating a dataset.

  2. Confirm that the dataset was created:

    bq ls

API

Call the datasets.insert method with a defined dataset resource.

{"datasetReference": {"datasetId": "bqml_tutorial"}}

BigQuery DataFrames

Before trying this sample, follow the BigQuery DataFrames setup instructions in the BigQuery quickstart using BigQuery DataFrames. For more information, see the BigQuery DataFrames reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up ADC for a local development environment.

import google.cloud.bigquery

bqclient = google.cloud.bigquery.Client()
bqclient.create_dataset("bqml_tutorial", exists_ok=True)

Create a connection

Create a Cloud resource connection and get the connection's service account. Create the connection in the same location as the other resources that you create in this tutorial.

You can skip this step if you either have a default connection configured, or you have the BigQuery Admin role.

Select one of the following options:

Console

  1. Go to the BigQuery page.

    Go to BigQuery

  2. In the left pane, click Explorer.

    If you don't see the left pane, click Expand left pane to open the pane.

  3. In the Explorer pane, expand your project name, and then click Connections.

  4. On the Connections page, click Create connection.

  5. For Connection type, choose Vertex AI remote models, remote functions, BigLake and Spanner (Cloud Resource).

  6. In the Connection ID field, enter a name for your connection.

  7. For Location type, select a location for your connection. The connection should be colocated with your other resources such as datasets.

  8. Click Create connection.

  9. Click Go to connection.

  10. In the Connection info pane, copy the service account ID for use in a later step.

bq

  1. In a command-line environment, create a connection:

    bq mk --connection \
        --location=REGION \
        --project_id=PROJECT_ID \
        --connection_type=CLOUD_RESOURCE \
        CONNECTION_ID

    The --project_id parameter overrides the default project.

    Replace the following:

    • REGION: your connection region
    • PROJECT_ID: your Google Cloud project ID
    • CONNECTION_ID: an ID for your connection

    When you create a connection resource, BigQuery creates a unique system service account and associates it with the connection.

    Troubleshooting: If you get the following connection error, update the Google Cloud SDK:

    Flags parsing error: flag --connection_type=CLOUD_RESOURCE: value should be one of...
  2. Retrieve and copy the service account ID for use in a later step:

    bq show --connection PROJECT_ID.REGION.CONNECTION_ID

    The output is similar to the following:

    name                          properties
    1234.REGION.CONNECTION_ID     {"serviceAccountId": "connection-1234-9u56h9@gcp-sa-bigquery-condel.iam.gserviceaccount.com"}

Python

Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import google.api_core.exceptions
from google.cloud import bigquery_connection_v1

client = bigquery_connection_v1.ConnectionServiceClient()


def create_connection(
    project_id: str,
    location: str,
    connection_id: str,
):
    """Creates a BigQuery connection to a Cloud Resource.

    Cloud Resource connection creates a service account which can then be
    granted access to other Google Cloud resources for federated queries.

    Args:
        project_id: The Google Cloud project ID.
        location: The location of the connection (for example, "us-central1").
        connection_id: The ID of the connection to create.
    """
    parent = client.common_location_path(project_id, location)
    connection = bigquery_connection_v1.Connection(
        friendly_name="Example Connection",
        description="A sample connection for a Cloud Resource.",
        cloud_resource=bigquery_connection_v1.CloudResourceProperties(),
    )
    try:
        created_connection = client.create_connection(
            parent=parent, connection_id=connection_id, connection=connection
        )
        print(f"Successfully created connection: {created_connection.name}")
        print(f"Friendly name: {created_connection.friendly_name}")
        print(f"Service Account: {created_connection.cloud_resource.service_account_id}")
    except google.api_core.exceptions.AlreadyExists:
        print(f"Connection with ID '{connection_id}' already exists.")
        print("Please use a different connection ID.")
    except Exception as e:
        print(f"An unexpected error occurred while creating the connection: {e}")

Node.js

Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

const {ConnectionServiceClient} = require('@google-cloud/bigquery-connection').v1;
const {status} = require('@grpc/grpc-js');

const client = new ConnectionServiceClient();

/**
 * Creates a new BigQuery connection to a Cloud Resource.
 *
 * A Cloud Resource connection creates a service account that can be granted access
 * to other Google Cloud resources.
 *
 * @param {string} projectId The Google Cloud project ID. for example, 'example-project-id'
 * @param {string} location The location of the project to create the connection in. for example, 'us-central1'
 * @param {string} connectionId The ID of the connection to create. for example, 'example-connection-id'
 */
async function createConnection(projectId, location, connectionId) {
  const parent = client.locationPath(projectId, location);
  const connection = {
    friendlyName: 'Example Connection',
    description: 'A sample connection for a Cloud Resource',
    // The service account for this cloudResource will be created by the API.
    // Its ID will be available in the response.
    cloudResource: {},
  };
  const request = {
    parent,
    connectionId,
    connection,
  };
  try {
    const [response] = await client.createConnection(request);
    console.log(`Successfully created connection: ${response.name}`);
    console.log(`Friendly name: ${response.friendlyName}`);
    console.log(`Service Account: ${response.cloudResource.serviceAccountId}`);
  } catch (err) {
    if (err.code === status.ALREADY_EXISTS) {
      console.log(`Connection '${connectionId}' already exists.`);
    } else {
      console.error(`Error creating connection: ${err.message}`);
    }
  }
}

Terraform

Use the google_bigquery_connection resource.

Note: To create BigQuery objects using Terraform, you must enable the Cloud Resource Manager API.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

The following example creates a Cloud resource connection named my_cloud_resource_connection in the US region:

# This queries the provider for project information.
data "google_project" "default" {}

# This creates a cloud resource connection in the US region named my_cloud_resource_connection.
# Note: The cloud resource nested object has only one output field - serviceAccountId.
resource "google_bigquery_connection" "default" {
  connection_id = "my_cloud_resource_connection"
  project       = data.google_project.default.project_id
  location      = "US"
  cloud_resource {}
}

To apply your Terraform configuration in a Google Cloud project, complete the steps in the following sections.

Prepare Cloud Shell

  1. Launch Cloud Shell.
  2. Set the default Google Cloud project where you want to apply your Terraform configurations.

    You only need to run this command once per project, and you can run it in any directory.

    export GOOGLE_CLOUD_PROJECT=PROJECT_ID

    Environment variables are overridden if you set explicit values in the Terraform configuration file.

Prepare the directory

Each Terraform configuration file must have its own directory (also called a root module).

  1. In Cloud Shell, create a directory and a new file within that directory. The filename must have the .tf extension, for example main.tf. In this tutorial, the file is referred to as main.tf.
    mkdir DIRECTORY && cd DIRECTORY && touch main.tf
  2. If you are following a tutorial, you can copy the sample code in each section or step.

    Copy the sample code into the newly created main.tf.

    Optionally, copy the code from GitHub. This is recommended when the Terraform snippet is part of an end-to-end solution.

  3. Review and modify the sample parameters to apply to your environment.
  4. Save your changes.
  5. Initialize Terraform. You only need to do this once per directory.
    terraform init

    Optionally, to use the latest Google provider version, include the -upgrade option:

    terraform init -upgrade

Apply the changes

  1. Review the configuration and verify that the resources that Terraform is going to create or update match your expectations:
    terraform plan

    Make corrections to the configuration as necessary.

  2. Apply the Terraform configuration by running the following command and entering yes at the prompt:
    terraform apply

    Wait until Terraform displays the "Apply complete!" message.

  3. Open your Google Cloud project to view the results. In the Google Cloud console, navigate to your resources in the UI to make sure that Terraform has created or updated them.
Note: Terraform samples typically assume that the required APIs are enabled in your Google Cloud project.

Grant access to the service account

Select one of the following options:

Console

  1. Go to the IAM & Admin page.

    Go to IAM & Admin

  2. Click Grant Access.

    The Add principals dialog opens.

  3. In the New principals field, enter the service account ID that you copied earlier.

  4. In the Select a role field, select Document AI, and then select Document AI Viewer.

  5. Click Add another role.

  6. In the Select a role field, select Cloud Storage, and then select Storage Object Viewer.

  7. Click Add another role.

  8. In the Select a role field, select Vertex AI, and then select Vertex AI User.

  9. Click Save.

gcloud

Use the gcloud projects add-iam-policy-binding command:

gcloud projects add-iam-policy-binding 'PROJECT_NUMBER' \
    --member='serviceAccount:MEMBER' \
    --role='roles/documentai.viewer' --condition=None

gcloud projects add-iam-policy-binding 'PROJECT_NUMBER' \
    --member='serviceAccount:MEMBER' \
    --role='roles/storage.objectViewer' --condition=None

gcloud projects add-iam-policy-binding 'PROJECT_NUMBER' \
    --member='serviceAccount:MEMBER' \
    --role='roles/aiplatform.user' --condition=None

Replace the following:

  • PROJECT_NUMBER: your project number.
  • MEMBER: the service account ID that you copied earlier.
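
Optionally, you can confirm that the role bindings took effect. A quick sketch, using the same placeholders:

    gcloud projects get-iam-policy PROJECT_NUMBER \
        --flatten="bindings[].members" \
        --filter="bindings.members:MEMBER" \
        --format="table(bindings.role)"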

Upload the sample PDF to Cloud Storage

To upload the sample PDF to Cloud Storage, follow these steps:

  1. Download the scf23.pdf sample PDF by going to https://www.federalreserve.gov/publications/files/scf23.pdf and clicking download.
  2. Create a Cloud Storage bucket.
  3. Upload the scf23.pdf file to the bucket.
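
If you prefer the command line, the following gcloud CLI sketch performs the same steps. The bucket name BUCKET is a placeholder that you must replace with a globally unique name; the us location matches the multi-region used elsewhere in this tutorial:

    # Download the sample PDF.
    curl -O https://www.federalreserve.gov/publications/files/scf23.pdf

    # Create a bucket and upload the file.
    gcloud storage buckets create gs://BUCKET --location=us
    gcloud storage cp scf23.pdf gs://BUCKET/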

Create an object table

Create an object table over the PDF file in Cloud Storage:

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, run the following statement:

    CREATE OR REPLACE EXTERNAL TABLE `bqml_tutorial.pdf`
    WITH CONNECTION `LOCATION.CONNECTION_ID`
    OPTIONS (
      object_metadata = 'SIMPLE',
      uris = ['gs://BUCKET/scf23.pdf']);

    Replace the following:

    • LOCATION: the connection location.
    • CONNECTION_ID: the ID of your BigQuery connection.

      When you view the connection details in the Google Cloud console, the CONNECTION_ID is the value in the last section of the fully qualified connection ID that is shown in Connection ID, for example projects/myproject/locations/connection_location/connections/myconnection.

    • BUCKET: the Cloud Storage bucket containing the scf23.pdf file. The full uri option value should look similar to ['gs://mybucket/scf23.pdf'].
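
To optionally confirm that BigQuery can read the file, you can query the object table's metadata. A minimal sketch, assuming the standard object table metadata columns uri and size:

    bq query --nouse_legacy_sql \
        'SELECT uri, size FROM `bqml_tutorial.pdf`'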

Create a document processor

Create a document processor based on the Layout Parser processor in the us multi-region.
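
If you want to script this step instead of using the console, the Document AI REST API can create the processor. The following is a sketch, assuming that LAYOUT_PARSER_PROCESSOR is the correct processor type identifier and using a hypothetical display name:

    # Create a Layout Parser processor in the us multi-region (sketch).
    curl -X POST \
        -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        -H "Content-Type: application/json" \
        -d '{"type": "LAYOUT_PARSER_PROCESSOR", "displayName": "layout-parser"}' \
        "https://us-documentai.googleapis.com/v1/projects/PROJECT_ID/locations/us/processors"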

Create the remote model for the document processor

Create a remote model to access the Document AI processor:

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, run the following statement:

    CREATE OR REPLACE MODEL `bqml_tutorial.parser_model`
    REMOTE WITH CONNECTION `LOCATION.CONNECTION_ID`
    OPTIONS (
      remote_service_type = 'CLOUD_AI_DOCUMENT_V1',
      document_processor = 'PROCESSOR_ID');

    Replace the following:

    • LOCATION: the connection location.
    • CONNECTION_ID: the ID of your BigQuery connection.

      When you view the connection details in the Google Cloud console, the CONNECTION_ID is the value in the last section of the fully qualified connection ID that is shown in Connection ID, for example projects/myproject/locations/connection_location/connections/myconnection.

    • PROCESSOR_ID: the document processor ID. To find this value, view the processor details, and then look at the ID row in the Basic Information section.

Parse the PDF file into chunks

Use the document processor with the ML.PROCESS_DOCUMENT function to parse the PDF file into chunks, and then write that content to a table. The ML.PROCESS_DOCUMENT function returns the PDF chunks in JSON format.

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, run the following statement:

    CREATE OR REPLACE TABLE bqml_tutorial.chunked_pdf AS (
      SELECT *
      FROM
        ML.PROCESS_DOCUMENT(
          MODEL bqml_tutorial.parser_model,
          TABLE bqml_tutorial.pdf,
          PROCESS_OPTIONS => (
            JSON '{"layout_config": {"chunking_config": {"chunk_size": 250}}}')));

Parse the PDF chunk data into separate columns

Extract the PDF content and metadata information from the JSON data returned by the ML.PROCESS_DOCUMENT function, and then write that content to a table:

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, run the following statement to parse the PDF content:

    CREATE OR REPLACE TABLE bqml_tutorial.parsed_pdf AS (
      SELECT
        uri,
        JSON_EXTRACT_SCALAR(json, '$.chunkId') AS id,
        JSON_EXTRACT_SCALAR(json, '$.content') AS content,
        JSON_EXTRACT_SCALAR(json, '$.pageFooters[0].text') AS page_footers_text,
        JSON_EXTRACT_SCALAR(json, '$.pageSpan.pageStart') AS page_span_start,
        JSON_EXTRACT_SCALAR(json, '$.pageSpan.pageEnd') AS page_span_end
      FROM
        bqml_tutorial.chunked_pdf,
        UNNEST(JSON_EXTRACT_ARRAY(ml_process_document_result.chunkedDocument.chunks, '$')) json);

  3. In the query editor, run the following statement to view a subset of the parsed PDF content:

    SELECT * FROM `bqml_tutorial.parsed_pdf` ORDER BY id LIMIT 5;

    The output is similar to the following:

    +-------------------------+------+----------------------------------------------------------------------------------------------------+-------------------+-----------------+---------------+
    |           uri           |  id  |                                              content                                               | page_footers_text | page_span_start | page_span_end |
    +-------------------------+------+----------------------------------------------------------------------------------------------------+-------------------+-----------------+---------------+
    | gs://mybucket/scf23.pdf | c1   | •BOARD OF OF FEDERAL GOVERN NOR RESERVE SYSTEM RESEARCH & ANALYSIS                                 | NULL              | 1               | 1             |
    | gs://mybucket/scf23.pdf | c10  | • In 2022, 20 percent of all families, 14 percent of families in the bottom half of the usual ...  | NULL              | 8               | 9             |
    | gs://mybucket/scf23.pdf | c100 | The SCF asks multiple questions intended to capture whether families are credit constrained, ...   | NULL              | 48              | 48            |
    | gs://mybucket/scf23.pdf | c101 | Bankruptcy behavior over the past five years is based on a series of retrospective questions ...   | NULL              | 48              | 48            |
    | gs://mybucket/scf23.pdf | c102 | # Percentiles of the Distributions of Income and Net Worth                                         | NULL              | 48              | 49            |
    +-------------------------+------+----------------------------------------------------------------------------------------------------+-------------------+-----------------+---------------+

Create the remote model for embedding generation

Create a remote model that represents a hosted Vertex AI text embedding generation model:

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, run the following statement:

    CREATE OR REPLACE MODEL `bqml_tutorial.embedding_model`
    REMOTE WITH CONNECTION `LOCATION.CONNECTION_ID`
    OPTIONS (ENDPOINT = 'text-embedding-005');

    Replace the following:

    • LOCATION: the connection location.
    • CONNECTION_ID: the ID of your BigQuery connection.

      When you view the connection details in the Google Cloud console, the CONNECTION_ID is the value in the last section of the fully qualified connection ID that is shown in Connection ID, for example projects/myproject/locations/connection_location/connections/myconnection.

Generate embeddings

Generate embeddings for the parsed PDF content and then write them to a table:

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, run the following statement:

    CREATE OR REPLACE TABLE `bqml_tutorial.embeddings` AS
    SELECT *
    FROM
      AI.GENERATE_EMBEDDING(
        MODEL `bqml_tutorial.embedding_model`,
        TABLE `bqml_tutorial.parsed_pdf`);
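
    Embedding generation can fail for individual rows, for example when quota is exhausted, so it can be useful to confirm that every chunk received an embedding. A minimal sketch comparing row counts between the two tables:

    bq query --nouse_legacy_sql \
        'SELECT
           (SELECT COUNT(*) FROM `bqml_tutorial.parsed_pdf`) AS chunks,
           (SELECT COUNT(*) FROM `bqml_tutorial.embeddings`) AS embedded_chunks'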

Run a vector search

Run a vector search against the parsed PDF content.

The following query takes text input, creates an embedding for that input using the AI.GENERATE_EMBEDDING function, and then uses the VECTOR_SEARCH function to match the input embedding with the most similar PDF content embeddings. The results are the top ten PDF chunks that are most semantically similar to the input.

  1. Go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, run the following SQL statement:

    SELECT
      query.query,
      base.id AS pdf_chunk_id,
      base.content,
      distance
    FROM
      VECTOR_SEARCH(
        TABLE `bqml_tutorial.embeddings`,
        'embedding',
        (
          SELECT
            embedding,
            content AS query
          FROM
            AI.GENERATE_EMBEDDING(
              MODEL `bqml_tutorial.embedding_model`,
              (SELECT 'Did the typical family net worth increase? If so, by how much?' AS content)
            )
        ),
        top_k => 10,
        OPTIONS => '{"fraction_lists_to_search": 0.01}')
    ORDER BY distance DESC;

    The output is similar to the following:

    +-------------------------------------------------+--------------+-------------------------------------------------------------------------------------+---------------------+
    |                      query                      | pdf_chunk_id |                                       content                                       |      distance       |
    +-------------------------------------------------+--------------+-------------------------------------------------------------------------------------+---------------------+
    | Did the typical family net worth increase? ,... | c9           | ## Assets                                                                           | 0.31113668174119469 |
    |                                                 |              |                                                                                     |                     |
    |                                                 |              | The homeownership rate increased slightly between 2019 and 2022, to 66.1            |                     |
    |                                                 |              | percent. For ...                                                                    |                     |
    +-------------------------------------------------+--------------+-------------------------------------------------------------------------------------+---------------------+
    | Did the typical family net worth increase? ,... | c50          | # Box 3. Net Housing Wealth and Housing Affordability                               | 0.30973592073929113 |
    |                                                 |              |                                                                                     |                     |
    |                                                 |              | For families that own their primary residence ...                                   |                     |
    +-------------------------------------------------+--------------+-------------------------------------------------------------------------------------+---------------------+
    | Did the typical family net worth increase? ,... | c50          | 3 In the 2019 SCF, a small portion of the data collection overlapped with early     | 0.29270064592817646 |
    |                                                 |              | months of the COVID- ...                                                            |                     |
    +-------------------------------------------------+--------------+-------------------------------------------------------------------------------------+---------------------+

Create the remote model for text generation

Create a remote model that represents a hosted Vertex AI text generation model:

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, run the following statement:

    CREATE OR REPLACE MODEL `bqml_tutorial.text_model`
    REMOTE WITH CONNECTION `LOCATION.CONNECTION_ID`
    OPTIONS (ENDPOINT = 'gemini-2.0-flash-001');

    Replace the following:

    • LOCATION: the connection location.
    • CONNECTION_ID: the ID of your BigQuery connection.

      When you view the connection details in the Google Cloud console, the CONNECTION_ID is the value in the last section of the fully qualified connection ID that is shown in Connection ID, for example projects/myproject/locations/connection_location/connections/myconnection.

Generate text augmented by vector search results

Perform a vector search on the embeddings to identify semantically similar PDF content, and then use the AI.GENERATE_TEXT function with the vector search results to augment the prompt input and improve the text generation results. In this case, the query uses information from the PDF chunks to answer a question about the change in family net worth over the past decade.

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, run the following statement:

    SELECT
      result AS generated
    FROM
      AI.GENERATE_TEXT(
        MODEL `bqml_tutorial.text_model`,
        (
          SELECT
            CONCAT(
              'Did the typical family net worth change? How does this compare to the SCF survey a decade earlier? Be concise and use the following context:',
              STRING_AGG(FORMAT("context: %s and reference: %s", base.content, base.uri), ',\n')
            ) AS prompt
          FROM
            VECTOR_SEARCH(
              TABLE `bqml_tutorial.embeddings`,
              'embedding',
              (
                SELECT
                  embedding,
                  content AS query
                FROM
                  AI.GENERATE_EMBEDDING(
                    MODEL `bqml_tutorial.embedding_model`,
                    (SELECT 'Did the typical family net worth change? How does this compare to the SCF survey a decade earlier?' AS content)
                  )
              ),
              top_k => 10,
              OPTIONS => '{"fraction_lists_to_search": 0.01}')
        ),
        STRUCT(512 AS max_output_tokens));

    The output is similar to the following:

    +-------------------------------------------------------------------------------+
    |                                   generated                                   |
    +-------------------------------------------------------------------------------+
    | Between the 2019 and 2022 Survey of Consumer Finances (SCF), real median      |
    | family net worth surged 37 percent to $192,900, and real mean net worth       |
    | increased 23 percent to $1,063,700.  This represents the largest three-year   |
    | increase in median net worth in the history of the modern SCF, exceeding the  |
    | next largest by more than double.  In contrast, between 2010 and 2013, real   |
    | median net worth decreased 2 percent, and real mean net worth remained        |
    | unchanged.                                                                    |
    +-------------------------------------------------------------------------------+

Clean up

    Caution: Deleting a project has the following effects:
    • Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project.
    • Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project.

    If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits.

  1. In the Google Cloud console, go to the Manage resources page.

    Go to Manage resources

  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.
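
Alternatively, you can delete the project from the command line, assuming that the gcloud CLI is configured:

    gcloud projects delete PROJECT_ID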

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-12-16 UTC.