Run inference on image object tables
Note: This feature may not be available when using reservations that are created with certain BigQuery editions. For more information about which features are enabled in each edition, see Introduction to BigQuery editions.

This document describes how to use BigQuery ML to run inference on image object tables.
You can run inference on image data by using an object table as input to the ML.PREDICT function.

To do this, you must first choose an appropriate model, upload it to Cloud Storage, and import it into BigQuery by running the CREATE MODEL statement. You can either create your own model, or download one from TensorFlow Hub.
Limitations
- Using BigQuery ML imported models with object tables is only supported when you use capacity-based pricing through reservations; on-demand pricing isn't supported.
- The image files associated with the object table must meet the following requirements:
  - Are less than 20 MB in size.
  - Have a format of JPEG, PNG, or BMP.
- The combined size of the image files associated with the object table must be less than 1 TB.
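Before creating the object table, you can check local image files against these limits. The following is a minimal Python sketch, not part of any BigQuery API; the format check uses the standard magic bytes for JPEG, PNG, and BMP:

```python
import os

MAX_FILE_BYTES = 20 * 1024 * 1024   # 20 MB per image file
MAX_TOTAL_BYTES = 1 * 1024 ** 4     # 1 TB combined

# Magic bytes for the supported formats.
SIGNATURES = {
    b"\xff\xd8\xff": "JPEG",
    b"\x89PNG\r\n\x1a\n": "PNG",
    b"BM": "BMP",
}

def image_format(data: bytes):
    """Return the detected format name, or None if unsupported."""
    for magic, name in SIGNATURES.items():
        if data.startswith(magic):
            return name
    return None

def check_images(paths):
    """Validate per-file size and format, plus the combined-size limit.

    Returns the combined size in bytes, or raises ValueError.
    """
    total = 0
    for path in paths:
        size = os.path.getsize(path)
        if size >= MAX_FILE_BYTES:
            raise ValueError(f"{path}: exceeds 20 MB")
        with open(path, "rb") as f:
            if image_format(f.read(16)) is None:
                raise ValueError(f"{path}: not JPEG, PNG, or BMP")
        total += size
    if total >= MAX_TOTAL_BYTES:
        raise ValueError("combined size exceeds 1 TB")
    return total
```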
The model must be one of the following:

- A TensorFlow or TensorFlow Lite model in SavedModel format.
- A PyTorch model in ONNX format.

The model must meet the input requirements and limitations described in the CREATE MODEL statement for importing TensorFlow models.

The serialized size of the model must be less than 450 MB.

The deserialized (in-memory) size of the model must be less than 1000 MB.
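You can estimate the serialized size before uploading by summing the sizes of the SavedModel artifacts on disk. A minimal sketch (the helper names are ours, and this only approximates the serialized size as the total size of the model files):

```python
import os

MAX_SERIALIZED_BYTES = 450 * 1024 * 1024  # 450 MB serialized-size limit

def saved_model_size(model_dir: str) -> int:
    """Sum the sizes of all files under a SavedModel directory
    (typically saved_model.pb plus a variables folder)."""
    total = 0
    for root, _dirs, files in os.walk(model_dir):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

def fits_serialized_limit(model_dir: str) -> bool:
    """Check the on-disk model size against the 450 MB limit."""
    return saved_model_size(model_dir) < MAX_SERIALIZED_BYTES
```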
The model input tensor must meet the following criteria:

- Have a data type of tf.float32 with values in [0, 1), or have a data type of tf.uint8 with values in [0, 255).
- Have the shape [batch_size, width, height, 3], where:
  - batch_size must be -1, None, or 1.
  - width and height must be greater than 0.
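The criteria above can be expressed as a small validation helper, for example against the dtype and shape that saved_model_cli reports for an input tensor. This is an illustrative Python sketch, not a BigQuery API:

```python
def valid_input_tensor(dtype: str, shape) -> bool:
    """Check an input tensor spec against the criteria above.

    dtype: 'float32' (values in [0, 1)) or 'uint8' (values in [0, 255)).
    shape: [batch_size, width, height, 3], batch_size in {-1, None, 1}.
    """
    if dtype not in ("float32", "uint8"):
        return False
    if len(shape) != 4 or shape[3] != 3:
        return False
    batch, width, height = shape[0], shape[1], shape[2]
    if batch not in (-1, None, 1):
        return False
    return width > 0 and height > 0
```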
The model must be trained with images in one of the following color spaces:

- RGB
- HSV
- YIQ
- YUV
- GRAYSCALE
You can use the ML.CONVERT_COLOR_SPACE function to convert input images to the color space that the model was trained with.
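To see what a color-space conversion does to pixel values, here is the standard NTSC RGB-to-YIQ transform in plain Python. This is illustrative only; ML.CONVERT_COLOR_SPACE performs the conversion server-side in BigQuery:

```python
def rgb_to_yiq(r: float, g: float, b: float):
    """Convert one RGB pixel (channel values in [0, 1]) to YIQ using
    the standard NTSC transform matrix."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return (y, i, q)
```

For a grayscale pixel (equal R, G, B), the I and Q chroma components come out (near) zero, which is why grayscale images survive this conversion with only the luma channel carrying information.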
Example models
The following models on TensorFlow Hub work with BigQuery ML and image object tables:
- ResNet 50. To try using this model, see Tutorial: Run inference on an object table by using a classification model.
- MobileNet V3. To try using this model, see Tutorial: Run inference on an object table by using a feature vector model.
Required permissions
- To upload the model to Cloud Storage, you need the storage.objects.create and storage.objects.get permissions.
- To load the model into BigQuery ML, you need the following permissions:
  - bigquery.jobs.create
  - bigquery.models.create
  - bigquery.models.getData
  - bigquery.models.updateData
- To run inference, you need the following permissions:
  - bigquery.tables.getData on the object table
  - bigquery.models.getData on the model
  - bigquery.jobs.create
Before you begin
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.

Roles required to select or create a project
- Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
- Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
Verify that billing is enabled for your Google Cloud project.
Enable the BigQuery and BigQuery Connection API APIs.
Roles required to enable APIs
To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.
Upload a model to Cloud Storage
Follow these steps to upload a model:
- If you have created your own model, save it locally. If you are using a model from TensorFlow Hub, download it to your local machine. If you are using TensorFlow, this should give you a saved_model.pb file and a variables folder for the model.
- If necessary, create a Cloud Storage bucket.
- Upload the model artifacts to the bucket.
Load the model into BigQuery ML
Loading a model that works with image object tables is the same as loading a model that works with structured data. To load a model into BigQuery ML, run the following CREATE MODEL statement:
CREATE MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`
OPTIONS(
  model_type = 'MODEL_TYPE',
  model_path = 'BUCKET_PATH');
Replace the following:
- PROJECT_ID: your project ID.
- DATASET_ID: the ID of the dataset to contain the model.
- MODEL_NAME: the name of the model.
- MODEL_TYPE: use one of the following values:
  - TENSORFLOW for a TensorFlow model
  - ONNX for a PyTorch model in ONNX format
- BUCKET_PATH: the path to the Cloud Storage bucket that contains the model, in the format [gs://bucket_name/[folder_name/]*].
The following example uses the default project and loads a TensorFlow model to BigQuery ML as my_vision_model, using the saved_model.pb file and variables folder from gs://my_bucket/my_model_folder:
CREATE MODEL `my_dataset.my_vision_model`
OPTIONS(
  model_type = 'TENSORFLOW',
  model_path = 'gs://my_bucket/my_model_folder/*');
Inspect the model
You can inspect the uploaded model to see what its input and output fields are. You need to reference these fields when you run inference on the object table.
Follow these steps to inspect a model:
Go to theBigQuery page.
In the left pane, click Explorer. If you don't see the left pane, click Expand left pane to open the pane.
In the Explorer pane, expand your project and click Datasets.
Click the dataset that contains your model.
Click the Models tab.
In the model pane that opens, click the Schema tab.
Look at the Labels section. This identifies the fields that are output by the model.
Look at the Features section. This identifies the fields that must be input into the model. You reference them in the SELECT statement for the ML.DECODE_IMAGE function.
For more detailed inspection of a TensorFlow model, for example to determine the shape of the model input, install TensorFlow and use the saved_model_cli show command.
Preprocess images
You must use the ML.DECODE_IMAGE function to convert image bytes to a multi-dimensional ARRAY representation. You can use ML.DECODE_IMAGE output directly in an ML.PREDICT function, or you can write the results from ML.DECODE_IMAGE to a table column and reference that column when you call ML.PREDICT.
The following example writes the output of the ML.DECODE_IMAGE function to a table:
CREATE OR REPLACE TABLE mydataset.mytable AS (
  SELECT ML.DECODE_IMAGE(data) AS decoded_image
  FROM mydataset.object_table);
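Conceptually, ML.DECODE_IMAGE turns encoded image bytes into an array of normalized pixel values plus shape metadata. A rough Python analogue of that representation, assuming raw interleaved RGB bytes and one plausible normalization into [0, 1) (the actual BigQuery output format and scaling may differ; this is only to illustrate the idea):

```python
def decode_pixels(raw: bytes, width: int, height: int):
    """Illustrative stand-in for the decode step: map interleaved
    RGB bytes (uint8, 0-255) to floats in [0, 1), row-major order."""
    assert len(raw) == width * height * 3, "expected 3 channels per pixel"
    # Dividing by 256 keeps every value strictly below 1.0.
    return [b / 256.0 for b in raw]
```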
Use the following functions to further process images so that they work with your model:
- The ML.CONVERT_COLOR_SPACE function converts images with an RGB color space to a different color space.
- The ML.CONVERT_IMAGE_TYPE function converts the pixel values output by the ML.DECODE_IMAGE function from floating point numbers to integers with a range of [0, 255).
- The ML.RESIZE_IMAGE function resizes images.
You can use these functions as part of the ML.PREDICT function, or run them on a table column containing image data output by ML.DECODE_IMAGE.
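The two preprocessing steps above can be sketched in plain Python: a nearest-neighbor resize over a 2-D grid of pixel values, and a float-to-integer type conversion from [0, 1) to [0, 255). These are illustrative helpers, not the BigQuery implementation (ML.RESIZE_IMAGE's actual resampling algorithm is not specified here):

```python
def resize_nearest(image, out_w: int, out_h: int):
    """Nearest-neighbor resize: image[y][x] -> out_h rows x out_w cols."""
    in_h, in_w = len(image), len(image[0])
    return [
        [image[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

def to_uint8(value: float) -> int:
    """Map a float in [0, 1) to an integer in [0, 255), by truncation."""
    return int(value * 255)
```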
Run inference
Once you have loaded an appropriate model and, optionally, preprocessed the image data, you can run inference on the image data.
To run inference:
SELECT *
FROM ML.PREDICT(
  MODEL `PROJECT_ID.DATASET_ID.MODEL_NAME`,
  (SELECT [other columns from the object table,] IMAGE_DATA AS MODEL_INPUT
  FROM PROJECT_ID.DATASET_ID.TABLE_NAME));
Replace the following:
- PROJECT_ID: the project ID of the project that contains the model and object table.
- DATASET_ID: the ID of the dataset that contains the model and object table.
- MODEL_NAME: the name of the model.
- IMAGE_DATA: the image data, represented either by the output of the ML.DECODE_IMAGE function, or by a table column containing image data output by ML.DECODE_IMAGE or other image processing functions.
- MODEL_INPUT: the name of an input field for the model. You can find this information by inspecting the model and looking at the field names in the Features section.
- TABLE_NAME: the name of the object table.
Examples
Example 1
The following example uses the ML.DECODE_IMAGE function directly in the ML.PREDICT function. It returns the inference results for all images in the object table, for a model with an input field of input and an output field of feature:
SELECT *
FROM ML.PREDICT(
  MODEL `my_dataset.vision_model`,
  (SELECT uri, ML.RESIZE_IMAGE(ML.DECODE_IMAGE(data), 480, 480, FALSE) AS input
  FROM `my_dataset.object_table`));
Example 2
The following example uses the ML.DECODE_IMAGE function directly in the ML.PREDICT function, and uses the ML.CONVERT_COLOR_SPACE function in the ML.PREDICT function to convert the image color space from RGB to YIQ. It also shows how to use object table fields to filter the objects included in inference. It returns the inference results for all JPG images in the object table, for a model with an input field of input and an output field of feature:
SELECT *
FROM ML.PREDICT(
  MODEL `my_dataset.vision_model`,
  (SELECT uri,
    ML.CONVERT_COLOR_SPACE(ML.RESIZE_IMAGE(ML.DECODE_IMAGE(data), 224, 280, TRUE), 'YIQ') AS input
  FROM `my_dataset.object_table`
  WHERE content_type = 'image/jpeg'));
Example 3
The following example uses results from ML.DECODE_IMAGE that have been written to a table column but not processed any further. It uses ML.RESIZE_IMAGE and ML.CONVERT_IMAGE_TYPE in the ML.PREDICT function to process the image data. It returns the inference results for all images in the decoded images table, for a model with an input field of input and an output field of feature.
Create the decoded images table:
CREATE OR REPLACE TABLE `my_dataset.decoded_images` AS (
  SELECT ML.DECODE_IMAGE(data) AS decoded_image
  FROM `my_dataset.object_table`);
Run inference on the decoded images table:
SELECT *
FROM ML.PREDICT(
  MODEL `my_dataset.vision_model`,
  (SELECT uri,
    ML.CONVERT_IMAGE_TYPE(ML.RESIZE_IMAGE(decoded_image, 480, 480, FALSE)) AS input
  FROM `my_dataset.decoded_images`));
Example 4
The following example uses results from ML.DECODE_IMAGE that have been written to a table column and preprocessed using ML.RESIZE_IMAGE. It returns the inference results for all images in the decoded images table, for a model with an input field of input and an output field of feature.
Create the table:
CREATE OR REPLACE TABLE `my_dataset.decoded_images` AS (
  SELECT ML.RESIZE_IMAGE(ML.DECODE_IMAGE(data), 480, 480, FALSE) AS decoded_image
  FROM `my_dataset.object_table`);
Run inference on the decoded images table:
SELECT *
FROM ML.PREDICT(
  MODEL `my_dataset.vision_model`,
  (SELECT uri, decoded_image AS input
  FROM `my_dataset.decoded_images`));
Example 5
The following example uses the ML.DECODE_IMAGE function directly in the ML.PREDICT function. In this example, the model has an output field of embeddings and two input fields: one that expects an image, f_img, and one that expects a string, f_txt. The image input comes from the object table and the string input comes from a standard BigQuery table that is joined with the object table by using the uri column.
SELECT *
FROM ML.PREDICT(
  MODEL `my_dataset.mixed_model`,
  (SELECT object_table.uri,
    ML.RESIZE_IMAGE(ML.DECODE_IMAGE(object_table.data), 224, 224, FALSE) AS f_img,
    image_description.description AS f_txt
  FROM `my_dataset.object_table`
  JOIN `my_dataset.image_description`
  ON object_table.uri = image_description.uri));
What's next
- Learn how to analyze object tables by using remote functions.
- Try running inference on an object table by using a feature vector model.
- Try running inference on an object table by using a classification model.
- Try analyzing an object table by using a remote function.
- Try annotating an image with the ML.ANNOTATE_IMAGE function.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2026-02-18 UTC.