Package types (2.17.0)
API documentation for the automl_v1.types package.
Classes
AnnotationPayload
Contains annotation information that is relevant to AutoML.
This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
AnnotationSpec
A definition of an annotation spec.
BatchPredictInputConfig
Input configuration for BatchPredict Action.
The format of input depends on the ML problem of the model used for prediction. As input source the gcs_source is expected, unless specified otherwise.
The formats are represented in EBNF with commas being literal and with non-terminal symbols defined near the end of this comment. The formats are:
AutoML Vision:
Classification:
One or more CSV files where each line is a single column:
::
    GCS_FILE_PATH

The Google Cloud Storage location of an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG. This path is treated as the ID in the batch predict output.
Sample rows:
::
    gs://folder/image1.jpeg
    gs://folder/image2.gif
    gs://folder/image3.png

Object Detection:
One or more CSV files where each line is a single column:
::
    GCS_FILE_PATH

The Google Cloud Storage location of an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG. This path is treated as the ID in the batch predict output.
Sample rows:
::
    gs://folder/image1.jpeg
    gs://folder/image2.gif
    gs://folder/image3.png

AutoML Video Intelligence:
Classification:
One or more CSV files where each line is a single column:
::
    GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END

GCS_FILE_PATH is the Google Cloud Storage location of a video up to 50GB in size and up to 3h in duration. Supported extensions: .MOV, .MPEG4, .MP4, .AVI.
TIME_SEGMENT_START and TIME_SEGMENT_END must be within the length of the video, and the end time must be after the start time.
Sample rows:
::
    gs://folder/video1.mp4,10,40
    gs://folder/video1.mp4,20,60
    gs://folder/vid2.mov,0,inf

Object Tracking:
One or more CSV files where each line is a single column:
::
    GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END

GCS_FILE_PATH is the Google Cloud Storage location of a video up to 50GB in size and up to 3h in duration. Supported extensions: .MOV, .MPEG4, .MP4, .AVI.
TIME_SEGMENT_START and TIME_SEGMENT_END must be within the length of the video, and the end time must be after the start time.
Sample rows:
::
    gs://folder/video1.mp4,10,40
    gs://folder/video1.mp4,20,60
    gs://folder/vid2.mov,0,inf

AutoML Natural Language:
Classification:
One or more CSV files where each line is a single column:
::
    GCS_FILE_PATH

GCS_FILE_PATH is the Google Cloud Storage location of a text file. Supported file extensions: .TXT, .PDF, .TIF, .TIFF
Text files can be no larger than 10MB in size.
Sample rows:
::
    gs://folder/text1.txt
    gs://folder/text2.pdf
    gs://folder/text3.tif

Sentiment Analysis:
One or more CSV files where each line is a single column:
::
    GCS_FILE_PATH

GCS_FILE_PATH is the Google Cloud Storage location of a text file. Supported file extensions: .TXT, .PDF, .TIF, .TIFF
Text files can be no larger than 128kB in size.
Sample rows:
::
    gs://folder/text1.txt
    gs://folder/text2.pdf
    gs://folder/text3.tif

Entity Extraction:
One or more JSONL (JSON Lines) files that either provide inline text or documents. You can only use one format, either inline text or documents, for a single call to [AutoMl.BatchPredict].
Each JSONL file contains, per line, a proto that wraps a temporary user-assigned TextSnippet ID (string up to 2000 characters long) called "id", a TextSnippet proto (in JSON representation) and zero or more TextFeature protos. Any given text snippet content must have 30,000 characters or less, and also be UTF-8 NFC encoded (ASCII already is). The IDs provided should be unique.
Each document JSONL file contains, per line, a proto that wraps a Document proto with input_config set. Each document cannot exceed 2MB in size.
Supported document extensions: .PDF, .TIF, .TIFF
Each JSONL file must not exceed 100MB in size, and no more than 20 JSONL files may be passed.
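The ID and snippet-size constraints above can be sketched as a small builder using only the standard library. The helper name is hypothetical; the "id" / "text_snippet" field names and the 2000-character and 30,000-character limits come from the description above:

```python
import json

# Hypothetical helper: builds one line of an inline JSONL batch-predict input.
def make_inline_jsonl_line(snippet_id: str, content: str) -> str:
    if len(snippet_id) > 2000:
        raise ValueError("id must be at most 2000 characters")
    if len(content) > 30000:
        raise ValueError("text snippet content must be 30,000 characters or less")
    return json.dumps({"id": snippet_id, "text_snippet": {"content": content}})

# Each JSON document sits on its own line of the .jsonl file.
lines = [
    make_inline_jsonl_line("my_first_id", "dog car cat"),
    make_inline_jsonl_line("2", "Extended sample content"),
]
jsonl = "\n".join(lines)
```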
Sample inline JSONL file (shown with artificial line breaks; actual line breaks are denoted by "\n"):
::
    { "id": "my_first_id", "text_snippet": { "content": "dog car cat" },
      "text_features": [
        { "text_segment": {"start_offset": 4, "end_offset": 6},
          "structural_type": PARAGRAPH,
          "bounding_poly": {
            "normalized_vertices": [
              {"x": 0.1, "y": 0.1}, {"x": 0.1, "y": 0.3},
              {"x": 0.3, "y": 0.3}, {"x": 0.3, "y": 0.1},
            ]
          },
        }
      ],
    }\n
    { "id": "2", "text_snippet": { "content": "Extended sample content", "mime_type": "text/plain" } }

Sample document JSONL file (shown with artificial line breaks; actual line breaks are denoted by "\n"):
::
    { "document": { "input_config": { "gcs_source": { "input_uris": [ "gs://folder/document1.pdf" ] } } } }\n
    { "document": { "input_config": { "gcs_source": { "input_uris": [ "gs://folder/document2.tif" ] } } } }

AutoML Tables:
See `Preparing your training data <https://cloud.google.com/automl-tables/docs/predict-batch>`__ for more information.
You can use either gcs_source or bigquery_source.
For gcs_source:
CSV file(s), each by itself 10GB or smaller and total size must be 100GB or smaller, where the first file must have a header containing column names. If the first row of a subsequent file is the same as the header, then it is also treated as a header. All other rows contain values for the corresponding columns.
The column names must contain the model's [input_feature_column_specs'][google.cloud.automl.v1.TablesModelMetadata.input_feature_column_specs] display_name-s (order doesn't matter). The columns corresponding to the model's input feature column specs must contain values compatible with the column spec's data types. Prediction on all the rows, i.e. the CSV lines, will be attempted.
Sample rows from a CSV file:
.. raw:: html
<pre>
"First Name","Last Name","Dob","Addresses"
"John","Doe","1968-01-22","[{"status":"current","address":"123_First_Avenue","city":"Seattle","state":"WA","zip":"11111","numberOfYears":"1"},{"status":"previous","address":"456_Main_Street","city":"Portland","state":"OR","zip":"22222","numberOfYears":"5"}]"
"Jane","Doe","1980-10-16","[{"status":"current","address":"789_Any_Avenue","city":"Albany","state":"NY","zip":"33333","numberOfYears":"2"},{"status":"previous","address":"321_Main_Street","city":"Hoboken","state":"NJ","zip":"44444","numberOfYears":"3"}]"
</pre>

For bigquery_source:
The URI of a BigQuery table. The user data size of the BigQuery table must be 100GB or smaller.
The column names must contain the model's [input_feature_column_specs'][google.cloud.automl.v1.TablesModelMetadata.input_feature_column_specs] display_name-s (order doesn't matter). The columns corresponding to the model's input feature column specs must contain values compatible with the column spec's data types. Prediction on all the rows of the table will be attempted.
Input field definitions:
GCS_FILE_PATH : The path to a file on Google Cloud Storage. For example, "gs://folder/video.avi".
TIME_SEGMENT_START : (TIME_OFFSET) Expresses a beginning, inclusive, of a time segment within an example that has a time dimension (e.g. video).
TIME_SEGMENT_END : (TIME_OFFSET) Expresses an end, exclusive, of a time segment within an example that has a time dimension (e.g. video).
TIME_OFFSET : A number of seconds as measured from the start of an example (e.g. video). Fractions are allowed, up to a microsecond precision. "inf" is allowed, and it means the end of the example.
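The TIME_OFFSET and segment rules above can be sketched as small helpers (names are hypothetical; the "inf" convention and the start-before-end rule are from the definitions above):

```python
# Hypothetical helper illustrating the TIME_OFFSET rules: seconds from the
# start of the example, fractions allowed, "inf" meaning the end of it.
def parse_time_offset(value: str) -> float:
    if value == "inf":
        return float("inf")
    seconds = float(value)
    if seconds < 0:
        raise ValueError("a time offset cannot be negative")
    return seconds

def is_valid_segment(start: str, end: str) -> bool:
    # TIME_SEGMENT_END is exclusive and must come after TIME_SEGMENT_START.
    return parse_time_offset(start) < parse_time_offset(end)

print(is_valid_segment("10", "40"))      # True
print(is_valid_segment("0", "inf"))      # True
print(is_valid_segment("60.5", "60.5"))  # False
```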
Errors:
If any of the provided CSV files can't be parsed, or if more than a certain percent of CSV rows cannot be processed, then the operation fails and prediction does not happen. Regardless of overall success or failure, the per-row failures, up to a certain count cap, will be listed in Operation.metadata.partial_failures.
.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
BatchPredictOperationMetadata
Details of BatchPredict operation.
BatchPredictOutputConfig
Output configuration for BatchPredict Action.
As destination the gcs_destination must be set unless specified otherwise for a domain. If gcs_destination is set then in the given directory a new directory is created. Its name will be "prediction-<model-display-name>-<timestamp-of-prediction-call>", where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. The contents of it depend on the ML problem the predictions are made for.
For Image Classification: In the created directory files
image_classification_1.jsonl, image_classification_2.jsonl, ..., image_classification_N.jsonl will be created, where N may be 1, and depends on the total number of the successfully predicted images and annotations. A single image will be listed only once with all its annotations, and its annotations will never be split across files. Each .JSONL file will contain, per line, a JSON representation of a proto that wraps the image's "ID" : "<id_value>" followed by a list of zero or more AnnotationPayload protos (called annotations), which have classification detail populated. If prediction for any image failed (partially or completely), then additional errors_1.jsonl, errors_2.jsonl, ..., errors_N.jsonl files will be created (N depends on total number of failed predictions). These files will have a JSON representation of a proto that wraps the same "ID" : "<id_value>" but here followed by exactly one `google.rpc.Status <https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto>`__ containing only code and message fields.

For Image Object Detection: In the created directory files
image_object_detection_1.jsonl, image_object_detection_2.jsonl, ..., image_object_detection_N.jsonl will be created, where N may be 1, and depends on the total number of the successfully predicted images and annotations. Each .JSONL file will contain, per line, a JSON representation of a proto that wraps the image's "ID" : "<id_value>" followed by a list of zero or more AnnotationPayload protos (called annotations), which have image_object_detection detail populated. A single image will be listed only once with all its annotations, and its annotations will never be split across files. If prediction for any image failed (partially or completely), then additional errors_1.jsonl, errors_2.jsonl, ..., errors_N.jsonl files will be created (N depends on total number of failed predictions). These files will have a JSON representation of a proto that wraps the same "ID" : "<id_value>" but here followed by exactly one `google.rpc.Status <https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto>`__ containing only code and message fields.

For Video Classification: In the created directory a video_classification.csv file, and a .JSON file per each video classification requested in the input (i.e. each line in given CSV(s)), will be created.
::
    The format of video_classification.csv is:
    GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END,JSON_FILE_NAME,STATUS
    where:
    GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END = matches 1 to 1 the prediction input lines (i.e. video_classification.csv has precisely the same number of lines as the prediction input had).
    JSON_FILE_NAME = Name of the .JSON file in the output directory, which contains prediction responses for the video time segment.
    STATUS = "OK" if prediction completed successfully, or an error code with message otherwise. If STATUS is not "OK" then the .JSON file for that line may not exist or be empty.
Each .JSON file, assuming STATUS is "OK", will contain a list of AnnotationPayload protos in JSON format, which are the predictions for the video time segment the file is assigned to in the video_classification.csv. All AnnotationPayload protos will have video_classification field set, and will be sorted by video_classification.type field (note that the returned types are governed by
classifaction_types parameter in [PredictService.BatchPredictRequest.params][]).

For Video Object Tracking: In the created directory a video_object_tracking.csv file will be created, and multiple files video_object_tracking_1.json, video_object_tracking_2.json, ..., video_object_tracking_N.json, where N is the number of requests in the input (i.e. the number of lines in given CSV(s)).
::
    The format of video_object_tracking.csv is:
    GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END,JSON_FILE_NAME,STATUS
    where:
    GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END = matches 1 to 1 the prediction input lines (i.e. video_object_tracking.csv has precisely the same number of lines as the prediction input had).
    JSON_FILE_NAME = Name of the .JSON file in the output directory, which contains prediction responses for the video time segment.
    STATUS = "OK" if prediction completed successfully, or an error code with message otherwise. If STATUS is not "OK" then the .JSON file for that line may not exist or be empty.
Each .JSON file, assuming STATUS is "OK", will contain a list of AnnotationPayload protos in JSON format, which are the predictions for each frame of the video time segment the file is assigned to in video_object_tracking.csv. All AnnotationPayload protos will have video_object_tracking field set.
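A minimal sketch of consuming such a status CSV with the standard library. The sample rows and file names are made up; the five-column layout and the STATUS semantics are the ones described above:

```python
import csv
import io

# Two hypothetical rows in the documented five-column layout.
sample = (
    "gs://folder/video1.mp4,10,40,result1.json,OK\n"
    "gs://folder/vid2.mov,0,inf,result2.json,InvalidArgument: bad segment\n"
)

ok_rows, failed_rows = [], []
for row in csv.reader(io.StringIO(sample)):
    gcs_path, seg_start, seg_end, json_file = row[0], row[1], row[2], row[3]
    status = ",".join(row[4:])  # an error message may itself contain commas
    # Only rows with STATUS == "OK" are guaranteed a non-empty .JSON file.
    (ok_rows if status == "OK" else failed_rows).append((gcs_path, json_file))
```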
For Text Classification: In the created directory files
text_classification_1.jsonl, text_classification_2.jsonl, ..., text_classification_N.jsonl will be created, where N may be 1, and depends on the total number of inputs and annotations found.

::
Each .JSONL file will contain, per line, a JSON representation of a proto that wraps input text file (or document) in the text snippet (or document) proto and a list of zero or more AnnotationPayload protos (called annotations), which have classification detail populated. A single text file (or document) will be listed only once with all its annotations, and its annotations will never be split across files.
If prediction for any input file (or document) failed (partially or completely), then additional
errors_1.jsonl, errors_2.jsonl, ..., errors_N.jsonl files will be created (N depends on total number of failed predictions). These files will have a JSON representation of a proto that wraps the input file followed by exactly one google.rpc.Status containing only code and message.

For Text Sentiment: In the created directory files
text_sentiment_1.jsonl, text_sentiment_2.jsonl, ..., text_sentiment_N.jsonl will be created, where N may be 1, and depends on the total number of inputs and annotations found.

::
Each .JSONL file will contain, per line, a JSON representation of a proto that wraps input text file (or document) in the text snippet (or document) proto and a list of zero or more AnnotationPayload protos (called annotations), which have text_sentiment detail populated. A single text file (or document) will be listed only once with all its annotations, and its annotations will never be split across files.
If prediction for any input file (or document) failed (partially or completely), then additional
errors_1.jsonl, errors_2.jsonl, ..., errors_N.jsonl files will be created (N depends on total number of failed predictions). These files will have a JSON representation of a proto that wraps the input file followed by exactly one google.rpc.Status containing only code and message.

For Text Extraction: In the created directory files
text_extraction_1.jsonl, text_extraction_2.jsonl, ..., text_extraction_N.jsonl will be created, where N may be 1, and depends on the total number of inputs and annotations found. The contents of these .JSONL file(s) depend on whether the input used inline text, or documents. If input was inline, then each .JSONL file will contain, per line, a JSON representation of a proto that wraps the given in request text snippet's "id" (if specified), followed by the input text snippet, and a list of zero or more AnnotationPayload protos (called annotations), which have text_extraction detail populated. A single text snippet will be listed only once with all its annotations, and its annotations will never be split across files. If input used documents, then each .JSONL file will contain, per line, a JSON representation of a proto that wraps the given in request document proto, followed by its OCR-ed representation in the form of a text snippet, finally followed by a list of zero or more AnnotationPayload protos (called annotations), which have text_extraction detail populated and refer, via their indices, to the OCR-ed text snippet. A single document (and its text snippet) will be listed only once with all its annotations, and its annotations will never be split across files. If prediction for any text snippet failed (partially or completely), then additional errors_1.jsonl, errors_2.jsonl, ..., errors_N.jsonl files will be created (N depends on total number of failed predictions). These files will have a JSON representation of a proto that wraps either the "id" : "<id_value>" (in case of inline) or the document proto (in case of document) but here followed by exactly one `google.rpc.Status <https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto>`__ containing only code and message.

For Tables: Output depends on whether gcs_destination or bigquery_destination is set (either is allowed). Google Cloud Storage case: In the created directory files
tables_1.csv, tables_2.csv, ..., tables_N.csv will be created, where N may be 1, and depends on the total number of the successfully predicted rows.

For all CLASSIFICATION prediction_type-s: Each .csv file will contain a header, listing all columns' display_name-s given on input followed by M target column names in the format of "<target_column_specs display_name>_<target_value>_score", where M is the number of distinct target values, i.e. number of distinct values in the target column of the table used to train the model. Subsequent lines will contain the respective values of successfully predicted rows, with the last, i.e. the target, columns having the corresponding prediction scores.

For REGRESSION and FORECASTING prediction_type-s: Each .csv file will contain a header, listing all columns' display_name-s given on input followed by the predicted target column with name in the format of "predicted_<target_column_specs display_name>". Subsequent lines will contain the respective values of successfully predicted rows, with the last, i.e. the target, column having the predicted target value.

If prediction for any rows failed, then additional errors_1.csv, errors_2.csv, ..., errors_N.csv files will be created (N depends on total number of failed rows). These files will have an analogous format as tables_*.csv, but always with a single target column having `google.rpc.Status <https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto>`__ represented as a JSON string, and containing only code and message.

BigQuery case: bigquery_destination pointing to a BigQuery project must be set. In the given project a new dataset will be created with name prediction_<model-display-name>_<timestamp-of-prediction-call>, where <model-display-name> will be made BigQuery-dataset-name compatible (e.g. most special characters will become underscores), and the timestamp will be in YYYY_MM_DDThh_mm_ss_sssZ "based on ISO-8601" format. In the dataset two tables will be created, predictions, and errors.
The predictions table's column names will be the input columns' display_name-s followed by the target column with name in the format of "predicted_<target_column_specs display_name>". The input feature columns will contain the respective values of successfully predicted rows, with the target column having an ARRAY of AnnotationPayloads, represented as STRUCT-s, containing TablesAnnotation. The errors table contains rows for which the prediction has failed; it has analogous input columns while the target column name is in the format of "errors_<target_column_specs display_name>", and as a value has `google.rpc.Status <https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto>`__ represented as a STRUCT, and containing only code and message.
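The BigQuery dataset naming rule above can be sketched as follows. The exact sanitization is only described loosely ("most special characters will become underscores"), so the regex below is an assumption, as is the helper name; the prediction_<model-display-name>_<timestamp> shape and the YYYY_MM_DDThh_mm_ss_sssZ timestamp format are from the description above:

```python
import re
from datetime import datetime, timezone

# Hypothetical sketch of the documented dataset naming rule.
def prediction_dataset_name(model_display_name: str, when: datetime) -> str:
    # Assumption: every non-alphanumeric character becomes an underscore.
    safe = re.sub(r"[^0-9A-Za-z_]", "_", model_display_name)
    # YYYY_MM_DDThh_mm_ss_sssZ, with milliseconds derived from microseconds.
    stamp = when.strftime("%Y_%m_%dT%H_%M_%S") + f"_{when.microsecond // 1000:03d}Z"
    return f"prediction_{safe}_{stamp}"

name = prediction_dataset_name(
    "my-model v2", datetime(2024, 1, 31, 12, 30, 5, 123000, tzinfo=timezone.utc)
)
print(name)  # prediction_my_model_v2_2024_01_31T12_30_05_123Z
```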
.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
BatchPredictRequest
Request message for PredictionService.BatchPredict.
BatchPredictResult
Result of the Batch Predict. This message is returned in the [response][google.longrunning.Operation.response] of the operation returned by PredictionService.BatchPredict.
BoundingBoxMetricsEntry
Bounding box matching model metrics for a single intersection-over-union threshold and multiple label match confidence thresholds.
BoundingPoly
A bounding polygon of a detected object on a plane. On output both vertices and normalized_vertices are provided. The polygon is formed by connecting vertices in the order they are listed.
ClassificationAnnotation
Contains annotation details specific to classification.
ClassificationEvaluationMetrics
Model evaluation metrics for classification problems. Note: For Video Classification these metrics only describe the quality of the Video Classification predictions of "segment_classification" type.
ClassificationType
Type of the classification problem.
CreateDatasetOperationMetadata
Details of CreateDataset operation.
CreateDatasetRequest
Request message for AutoMl.CreateDataset.
CreateModelOperationMetadata
Details of CreateModel operation.
CreateModelRequest
Request message for AutoMl.CreateModel.
Dataset
A workspace for solving a single, particular machine learning (ML) problem. A workspace contains examples that may be annotated.
This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
DeleteDatasetRequest
Request message for AutoMl.DeleteDataset.
DeleteModelRequest
Request message for AutoMl.DeleteModel.
DeleteOperationMetadata
Details of operations that perform deletes of any entities.
DeployModelOperationMetadata
Details of DeployModel operation.
DeployModelRequest
Request message for AutoMl.DeployModel.
This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
Document
A structured text document, e.g. a PDF.
DocumentDimensions
Message that describes dimension of a document.
DocumentInputConfig
Input configuration of a Document.
ExamplePayload
Example data used for training or prediction.
This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
ExportDataOperationMetadata
Details of ExportData operation.
ExportDataRequest
Request message for AutoMl.ExportData.
ExportModelOperationMetadata
Details of ExportModel operation.
ExportModelRequest
Request message for AutoMl.ExportModel. Models need to be enabled for exporting, otherwise an error code will be returned.
GcsDestination
The Google Cloud Storage location where the output is to be written to.
GcsSource
The Google Cloud Storage location for the input content.
GetAnnotationSpecRequest
Request message for AutoMl.GetAnnotationSpec.
GetDatasetRequest
Request message for AutoMl.GetDataset.
GetModelEvaluationRequest
Request message for AutoMl.GetModelEvaluation.
GetModelRequest
Request message for AutoMl.GetModel.
Image
A representation of an image. Only images up to 30MB in size are supported.
.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
ImageClassificationDatasetMetadata
Dataset metadata that is specific to image classification.
ImageClassificationModelDeploymentMetadata
Model deployment metadata specific to Image Classification.
ImageClassificationModelMetadata
Model metadata for image classification.
ImageObjectDetectionAnnotation
Annotation details for image object detection.
ImageObjectDetectionDatasetMetadata
Dataset metadata specific to image object detection.
ImageObjectDetectionEvaluationMetrics
Model evaluation metrics for image object detection problems. Evaluates prediction quality of labeled bounding boxes.
ImageObjectDetectionModelDeploymentMetadata
Model deployment metadata specific to Image Object Detection.
ImageObjectDetectionModelMetadata
Model metadata specific to image object detection.
ImportDataOperationMetadata
Details of ImportData operation.
ImportDataRequest
Request message for AutoMl.ImportData.
InputConfig
Input configuration for AutoMl.ImportData action.
The format of input depends on the dataset_metadata of the Dataset into which the import is happening. As input source the gcs_source is expected, unless specified otherwise. Additionally any input .CSV file by itself must be 100MB or smaller, unless specified otherwise. If an "example" file (that is, image, video etc.) with identical content (even if it had a different GCS_FILE_PATH) is mentioned multiple times, then its label, bounding boxes etc. are appended. The same file should always be provided with the same ML_USE and GCS_FILE_PATH; if it is not, then these values are nondeterministically selected from the given ones.
The formats are represented in EBNF with commas being literal and with non-terminal symbols defined near the end of this comment. The formats are:
AutoML Vision:
Classification:
See `Preparing your training data <https://cloud.google.com/vision/automl/docs/prepare>`__ for more information.
CSV file(s) with each line in format:
::
    ML_USE,GCS_FILE_PATH,LABEL,LABEL,...

ML_USE - Identifies the data set that the current row (file) applies to. This value can be one of the following:

TRAIN - Rows in this file are used to train the model.
TEST - Rows in this file are used to test the model during training.
UNASSIGNED - Rows in this file are not categorized. They are automatically divided into train and test data, 80% for training and 20% for testing.
GCS_FILE_PATH - The Google Cloud Storage location of an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG, .WEBP, .BMP, .TIFF, .ICO.

LABEL - A label that identifies the object in the image.
For the MULTICLASS classification type, at most one LABEL is allowed per image. If an image has not yet been labeled, then it should be mentioned just once with no LABEL.
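The row format and the ML_USE / MULTICLASS rules above can be sketched as a small builder (the helper and constant names are hypothetical; the allowed ML_USE values and the one-label-for-MULTICLASS rule are from the description above):

```python
# Hypothetical helper composing one import CSV row in the
# ML_USE,GCS_FILE_PATH,LABEL,... format described above.
VALID_ML_USE = {"TRAIN", "TEST", "UNASSIGNED"}

def make_import_row(ml_use: str, gcs_path: str, labels=(), multiclass: bool = False) -> str:
    if ml_use not in VALID_ML_USE:
        raise ValueError(f"ML_USE must be one of {sorted(VALID_ML_USE)}")
    if multiclass and len(labels) > 1:
        raise ValueError("MULTICLASS allows at most one LABEL per image")
    return ",".join([ml_use, gcs_path, *labels])

print(make_import_row("TRAIN", "gs://folder/image1.jpg", ["daisy"]))
# TRAIN,gs://folder/image1.jpg,daisy
print(make_import_row("UNASSIGNED", "gs://folder/image4.jpg"))
# UNASSIGNED,gs://folder/image4.jpg
```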
Some sample rows:
::
    TRAIN,gs://folder/image1.jpg,daisy
    TEST,gs://folder/image2.jpg,dandelion,tulip,rose
    UNASSIGNED,gs://folder/image3.jpg,daisy
    UNASSIGNED,gs://folder/image4.jpg

Object Detection:
See `Preparing your training data <https://cloud.google.com/vision/automl/object-detection/docs/prepare>`__ for more information.
CSV file(s) with each line in format:
::
    ML_USE,GCS_FILE_PATH,[LABEL],(BOUNDING_BOX | ,,,,,,,)

ML_USE - Identifies the data set that the current row (file) applies to. This value can be one of the following:

TRAIN - Rows in this file are used to train the model.
TEST - Rows in this file are used to test the model during training.
UNASSIGNED - Rows in this file are not categorized. They are automatically divided into train and test data, 80% for training and 20% for testing.
GCS_FILE_PATH - The Google Cloud Storage location of an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG. Each image is assumed to be exhaustively labeled.

LABEL - A label that identifies the object in the image specified by the BOUNDING_BOX.

BOUNDING_BOX - The vertices of an object in the example image. The minimum allowed BOUNDING_BOX edge length is 0.01, and no more than 500 BOUNDING_BOX instances per image are allowed (one BOUNDING_BOX per line). If an image has none of the looked-for objects then it should be mentioned just once with no LABEL and the ",,,,,,," in place of the BOUNDING_BOX.
Four sample rows:
::
    TRAIN,gs://folder/image1.png,car,0.1,0.1,,,0.3,0.3,,
    TRAIN,gs://folder/image1.png,bike,.7,.6,,,.8,.9,,
    UNASSIGNED,gs://folder/im2.png,car,0.1,0.1,0.2,0.1,0.2,0.3,0.1,0.3
    TEST,gs://folder/im3.png,,,,,,,,,

AutoML Video Intelligence:
Classification:
See `Preparing your training data <https://cloud.google.com/video-intelligence/automl/docs/prepare>`__ for more information.
CSV file(s) with each line in format:
::
    ML_USE,GCS_FILE_PATH

For ML_USE, do not use VALIDATE.
GCS_FILE_PATH is the path to another .csv file that describes training examples for a given ML_USE, using the following row format:
::
    GCS_FILE_PATH,(LABEL,TIME_SEGMENT_START,TIME_SEGMENT_END | ,,)

Here GCS_FILE_PATH leads to a video of up to 50GB in size and up to 3h duration. Supported extensions: .MOV, .MPEG4, .MP4, .AVI.
TIME_SEGMENT_START and TIME_SEGMENT_END must be within the length of the video, and the end time must be after the start time. Any segment of a video which has one or more labels on it is considered a hard negative for all other labels. Any segment with no labels on it is considered to be unknown. If a whole video is unknown, then it should be mentioned just once with ",," in place of LABEL, TIME_SEGMENT_START,TIME_SEGMENT_END.
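The per-row rules above can be sketched as a small check (the function name is hypothetical; the rules, including the ",," convention for a fully unknown video, are from the description above):

```python
# Sketch of the per-row rules: segments must lie within the video and end
# after they start; a fully unknown video is a single row with empty
# LABEL, TIME_SEGMENT_START, TIME_SEGMENT_END fields.
def check_video_row(label: str, start: str, end: str, video_length: float) -> bool:
    if label == "" and start == "" and end == "":
        return True  # whole video unknown: mentioned once with ",,"
    s, e = float(start), float(end)
    return 0 <= s < e <= video_length

print(check_video_row("car", "120", "180.000021", 200.0))  # True
print(check_video_row("", "", "", 200.0))                  # True
print(check_video_row("car", "60", "30", 200.0))           # False
```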
Sample top level CSV file:
::
    TRAIN,gs://folder/train_videos.csv
    TEST,gs://folder/test_videos.csv
    UNASSIGNED,gs://folder/other_videos.csv

Sample rows of a CSV file for a particular ML_USE:
::
    gs://folder/video1.avi,car,120,180.000021
    gs://folder/video1.avi,bike,150,180.000021
    gs://folder/vid2.avi,car,0,60.5
    gs://folder/vid3.avi,,,

Object Tracking:
See `Preparing your training data </video-intelligence/automl/object-tracking/docs/prepare>`__ for more information.
CSV file(s) with each line in format:
::
    ML_USE,GCS_FILE_PATH

For ML_USE, do not use VALIDATE.
GCS_FILE_PATH is the path to another .csv file that describes training examples for a given ML_USE, using the following row format:
::
    GCS_FILE_PATH,LABEL,[INSTANCE_ID],TIMESTAMP,BOUNDING_BOX

or
::
    GCS_FILE_PATH,,,,,,,,,,

Here GCS_FILE_PATH leads to a video of up to 50GB in size and up to 3h duration. Supported extensions: .MOV, .MPEG4, .MP4, .AVI. Providing INSTANCE_IDs can help to obtain a better model. When a specific labeled entity leaves the video frame and shows up afterwards, it is not required, albeit preferable, that the same INSTANCE_ID is given to it.
TIMESTAMP must be within the length of the video; the BOUNDING_BOX is assumed to be drawn on the video's frame closest to the TIMESTAMP. Any frame mentioned by a TIMESTAMP is expected to be exhaustively labeled, and no more than 500 BOUNDING_BOX-es per frame are allowed. If a whole video is unknown, then it should be mentioned just once with ",,,,,,,,,," in place of LABEL, [INSTANCE_ID],TIMESTAMP,BOUNDING_BOX.
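The per-frame limit above (at most 500 boxes per TIMESTAMP, one box per CSV line) can be sketched like this; the sample rows are hypothetical and carry only the first four columns of the format:

```python
from collections import Counter

# Hypothetical parsed rows: (GCS_FILE_PATH, LABEL, INSTANCE_ID, TIMESTAMP).
# One BOUNDING_BOX per line means counting lines per (video, timestamp)
# counts the boxes on that frame.
rows = [
    ("gs://folder/video1.avi", "car", "1", "12.10"),
    ("gs://folder/video1.avi", "car", "2", "12.10"),
    ("gs://folder/video1.avi", "car", "1", "12.90"),
]
boxes_per_frame = Counter((path, ts) for path, _label, _inst, ts in rows)
over_limit = [frame for frame, n in boxes_per_frame.items() if n > 500]
```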
Sample top level CSV file:
::
    TRAIN,gs://folder/train_videos.csv
    TEST,gs://folder/test_videos.csv
    UNASSIGNED,gs://folder/other_videos.csv

Seven sample rows of a CSV file for a particular ML_USE:
::
    gs://folder/video1.avi,car,1,12.10,0.8,0.8,0.9,0.8,0.9,0.9,0.8,0.9
    gs://folder/video1.avi,car,1,12.90,0.4,0.8,0.5,0.8,0.5,0.9,0.4,0.9
    gs://folder/video1.avi,car,2,12.10,.4,.2,.5,.2,.5,.3,.4,.3
    gs://folder/video1.avi,car,2,12.90,.8,.2,,,.9,.3,,
    gs://folder/video1.avi,bike,,12.50,.45,.45,,,.55,.55,,
    gs://folder/video2.avi,car,1,0,.1,.9,,,.9,.1,,
    gs://folder/video2.avi,,,,,,,,,,,

AutoML Natural Language:
Entity Extraction:
See `Preparing your training data </natural-language/automl/entity-analysis/docs/prepare>`__ for more information.
One or more CSV file(s) with each line in the following format:
::
    ML_USE,GCS_FILE_PATH

ML_USE - Identifies the data set that the current row (file) applies to. This value can be one of the following:

TRAIN - Rows in this file are used to train the model.
TEST - Rows in this file are used to test the model during training.
UNASSIGNED - Rows in this file are not categorized. They are automatically divided into train and test data, 80% for training and 20% for testing.
GCS_FILE_PATH - Identifies a JSON Lines (.JSONL) file stored in Google Cloud Storage that contains in-line text or documents for model training.
After the training data set has been determined from the TRAIN and UNASSIGNED CSV files, the training data is divided into train and validation data sets, 70% for training and 30% for validation.
For example:
::
TRAIN,gs://folder/file1.jsonl
VALIDATE,gs://folder/file2.jsonl
TEST,gs://folder/file3.jsonl

In-line JSONL files
In-line .JSONL files contain, per line, a JSON document that wraps a [text_snippet][google.cloud.automl.v1.TextSnippet] field followed by one or more [annotations][google.cloud.automl.v1.AnnotationPayload] fields, which have display_name and text_extraction fields to describe the entity from the text snippet. Multiple JSON documents can be separated using line breaks (\n).
The supplied text must be annotated exhaustively. For example, if you include the text "horse", but do not label it as "animal", then "horse" is assumed to not be an "animal".
Any given text snippet content must have 30,000 characters or less, and also be UTF-8 NFC encoded. ASCII is accepted as it is UTF-8 NFC encoded.
For example:
::
{
  "text_snippet": { "content": "dog car cat" },
  "annotations": [
    { "display_name": "animal", "text_extraction": { "text_segment": {"start_offset": 0, "end_offset": 2} } },
    { "display_name": "vehicle", "text_extraction": { "text_segment": {"start_offset": 4, "end_offset": 6} } },
    { "display_name": "animal", "text_extraction": { "text_segment": {"start_offset": 8, "end_offset": 10} } }
  ]
}\n
{
  "text_snippet": { "content": "This dog is good." },
  "annotations": [
    { "display_name": "animal", "text_extraction": { "text_segment": {"start_offset": 5, "end_offset": 7} } }
  ]
}

JSONL files that reference documents
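A sketch of producing such a line programmatically. `make_jsonl_line` is a hypothetical helper, not part of the client library; it also enforces the snippet constraints stated above (at most 30,000 characters, UTF-8 NFC), using the standard-library `unicodedata.is_normalized` for the NFC check.

```python
import json
import unicodedata

MAX_SNIPPET_CHARS = 30_000  # per-snippet limit stated above

def make_jsonl_line(content, entities):
    """Serialize one in-line entity-extraction document.

    entities: list of (display_name, start_offset, end_offset) tuples
    describing labeled spans of `content`.
    """
    if len(content) > MAX_SNIPPET_CHARS:
        raise ValueError("snippet longer than 30,000 characters")
    if not unicodedata.is_normalized("NFC", content):
        raise ValueError("snippet is not UTF-8 NFC normalized")
    return json.dumps({
        "text_snippet": {"content": content},
        "annotations": [
            {
                "display_name": name,
                "text_extraction": {
                    "text_segment": {"start_offset": start, "end_offset": end}
                },
            }
            for name, start, end in entities
        ],
    })

line = make_jsonl_line(
    "dog car cat",
    [("animal", 0, 2), ("vehicle", 4, 6), ("animal", 8, 10)],
)
```

Joining several such lines with "\n" yields a complete in-line .JSONL file body.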
.JSONL files contain, per line, a JSON document that wraps an input_config that contains the path to a source document. Multiple JSON documents can be separated using line breaks (\n).
Supported document extensions: .PDF, .TIF, .TIFF
For example:
::
{
  "document": {
    "input_config": {
      "gcs_source": { "input_uris": [ "gs://folder/document1.pdf" ] }
    }
  }
}\n
{
  "document": {
    "input_config": {
      "gcs_source": { "input_uris": [ "gs://folder/document2.tif" ] }
    }
  }
}

In-line JSONL files with document layout information
Note: You can only annotate documents using the UI. The format described below applies to annotated documents exported using the UI or exportData.
In-line .JSONL files for documents contain, per line, a JSON document that wraps a document field that provides the textual content of the document and the layout information.
For example:
::
{
  "document": {
    "document_text": { "content": "dog car cat" },
    "layout": [
      {
        "text_segment": { "start_offset": 0, "end_offset": 11 },
        "page_number": 1,
        "bounding_poly": {
          "normalized_vertices": [
            {"x": 0.1, "y": 0.1}, {"x": 0.1, "y": 0.3},
            {"x": 0.3, "y": 0.3}, {"x": 0.3, "y": 0.1}
          ]
        },
        "text_segment_type": TOKEN
      }
    ],
    "document_dimensions": { "width": 8.27, "height": 11.69, "unit": INCH },
    "page_count": 3
  },
  "annotations": [
    { "display_name": "animal", "text_extraction": { "text_segment": {"start_offset": 0, "end_offset": 3} } },
    { "display_name": "vehicle", "text_extraction": { "text_segment": {"start_offset": 4, "end_offset": 7} } },
    { "display_name": "animal", "text_extraction": { "text_segment": {"start_offset": 8, "end_offset": 11} } }
  ]
}

Classification:
See Preparing your training data <https://cloud.google.com/natural-language/automl/docs/prepare>__ for more information.
One or more CSV file(s) with each line in the following format:
::
ML_USE,(TEXT_SNIPPET | GCS_FILE_PATH),LABEL,LABEL,...

ML_USE - Identifies the data set that the current row (file) applies to. This value can be one of the following:

- TRAIN - Rows in this file are used to train the model.
- TEST - Rows in this file are used to test the model during training.
- UNASSIGNED - Rows in this file are not categorized. They are automatically divided into train and test data, 80% for training and 20% for testing.
TEXT_SNIPPET and GCS_FILE_PATH are distinguished by a pattern. If the column content is a valid Google Cloud Storage file path, that is, prefixed by "gs://", it is treated as a GCS_FILE_PATH. Otherwise, if the content is enclosed in double quotes (""), it is treated as a TEXT_SNIPPET. For GCS_FILE_PATH, the path must lead to a file with supported extension and UTF-8 encoding, for example, "gs://folder/content.txt". AutoML imports the file content as a text snippet. For TEXT_SNIPPET, AutoML imports the column content excluding quotes. In both cases, the content must be 10MB or less in size. For zip files, the size of each file inside the zip must be 10MB or less.

For the MULTICLASS classification type, at most one LABEL is allowed.

The ML_USE and LABEL columns are optional. Supported file extensions: .TXT, .PDF, .TIF, .TIFF, .ZIP
A maximum of 100 unique labels are allowed per CSV row.
Sample rows:
::
TRAIN,"They have bad food and very rude",RudeService,BadFood
gs://folder/content.txt,SlowService
TEST,gs://folder/document.pdf
VALIDATE,gs://folder/text_files.zip,BadFood

Sentiment Analysis:
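The path-versus-snippet rule described above can be sketched as a small predicate. `column_kind` is a hypothetical pre-import lint helper, not an AutoML API; it only encodes the "gs:// prefix vs. double quotes" pattern from the format description.

```python
def column_kind(cell):
    """Classify the content column per the rule above: a "gs://"
    prefix means GCS_FILE_PATH, a double-quoted value means
    TEXT_SNIPPET."""
    if cell.startswith("gs://"):
        return "GCS_FILE_PATH"
    if len(cell) >= 2 and cell.startswith('"') and cell.endswith('"'):
        return "TEXT_SNIPPET"
    raise ValueError(f"not a gs:// path or a quoted snippet: {cell!r}")

kinds = [
    column_kind("gs://folder/content.txt"),
    column_kind('"They have bad food and very rude"'),
]
```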
See Preparing your training data <https://cloud.google.com/natural-language/automl/docs/prepare>__ for more information.
CSV file(s) with each line in format:
::
ML_USE,(TEXT_SNIPPET | GCS_FILE_PATH),SENTIMENT

ML_USE - Identifies the data set that the current row (file) applies to. This value can be one of the following:

- TRAIN - Rows in this file are used to train the model.
- TEST - Rows in this file are used to test the model during training.
- UNASSIGNED - Rows in this file are not categorized. They are automatically divided into train and test data, 80% for training and 20% for testing.
TEXT_SNIPPET and GCS_FILE_PATH are distinguished by a pattern. If the column content is a valid Google Cloud Storage file path, that is, prefixed by "gs://", it is treated as a GCS_FILE_PATH. Otherwise, if the content is enclosed in double quotes (""), it is treated as a TEXT_SNIPPET. For GCS_FILE_PATH, the path must lead to a file with supported extension and UTF-8 encoding, for example, "gs://folder/content.txt". AutoML imports the file content as a text snippet. For TEXT_SNIPPET, AutoML imports the column content excluding quotes. In both cases, the content must be 128kB or less in size. For zip files, the size of each file inside the zip must be 128kB or less.

The ML_USE and SENTIMENT columns are optional. Supported file extensions: .TXT, .PDF, .TIF, .TIFF, .ZIP

SENTIMENT - An integer between 0 and Dataset.text_sentiment_dataset_metadata.sentiment_max (inclusive). Describes the ordinal of the sentiment - a higher value means a more positive sentiment. All the values are completely relative, i.e. neither 0 needs to mean a negative or neutral sentiment nor sentiment_max a positive one - it is just required that 0 is the least positive sentiment in the data, and sentiment_max is the most positive one. The SENTIMENT shouldn't be confused with "score" or "magnitude" from the previous Natural Language Sentiment Analysis API. All SENTIMENT values between 0 and sentiment_max must be represented in the imported data. On prediction the same 0 to sentiment_max range will be used. The difference between neighboring sentiment values need not be uniform, e.g. 1 and 2 may be similar whereas the difference between 2 and 3 may be large.
Sample rows:
::
TRAIN,"@freewrytin this is way too good for your product",2
gs://folder/content.txt,3
TEST,gs://folder/document.pdf
VALIDATE,gs://folder/text_files.zip,2

AutoML Tables:
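The two SENTIMENT requirements above (values in [0, sentiment_max], and every value in that range represented at least once) lend themselves to a pre-import check. `check_sentiment_rows` is a hypothetical lint helper, not part of the client library.

```python
def check_sentiment_rows(rows, sentiment_max):
    """Validate (content, SENTIMENT) pairs against the rules above:
    each value is an integer in [0, sentiment_max], and every value
    in that range occurs at least once in the imported data."""
    seen = set()
    for _, sentiment in rows:
        if not 0 <= sentiment <= sentiment_max:
            raise ValueError(f"sentiment {sentiment} out of range")
        seen.add(sentiment)
    missing = sorted(set(range(sentiment_max + 1)) - seen)
    if missing:
        raise ValueError(f"unrepresented sentiment values: {missing}")
    return True

ok = check_sentiment_rows(
    [("way too good", 2), ("content.txt", 3), ("document.pdf", 1), ("zip", 0)],
    sentiment_max=3,
)
```

A data set covering only {0, 2} with sentiment_max=3 would be rejected, since 1 and 3 are unrepresented.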
See Preparing your training data <https://cloud.google.com/automl-tables/docs/prepare>__ for more information.
You can use either gcs_source or bigquery_source. All input is concatenated into a single primary_table_spec_id.
For gcs_source:
CSV file(s), where the first row of the first file is the header, containing unique column names. If the first row of a subsequent file is the same as the header, then it is also treated as a header. All other rows contain values for the corresponding columns.

Each .CSV file by itself must be 10GB or smaller, and their total size must be 100GB or smaller.
First three sample rows of a CSV file:
.. raw:: html
<pre>
"Id","First Name","Last Name","Dob","Addresses"
"1","John","Doe","1968-01-22","[{"status":"current","address":"123_First_Avenue","city":"Seattle","state":"WA","zip":"11111","numberOfYears":"1"},{"status":"previous","address":"456_Main_Street","city":"Portland","state":"OR","zip":"22222","numberOfYears":"5"}]"
"2","Jane","Doe","1980-10-16","[{"status":"current","address":"789_Any_Avenue","city":"Albany","state":"NY","zip":"33333","numberOfYears":"2"},{"status":"previous","address":"321_Main_Street","city":"Hoboken","state":"NJ","zip":"44444","numberOfYears":"3"}]"
</pre>

For bigquery_source:
A URI of a BigQuery table. The user data size of the BigQuery table must be 100GB or smaller.

An imported table must have between 2 and 1,000 columns, inclusive, and between 1,000 and 100,000,000 rows, inclusive. At most 5 import data operations can run in parallel.
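The table-shape bounds above reduce to a one-line check. `table_within_limits` is a hypothetical helper for validating a table before import, not an AutoML API.

```python
def table_within_limits(num_columns, num_rows):
    """True when a table satisfies the documented import bounds:
    2..1,000 columns and 1,000..100,000,000 rows, inclusive."""
    return 2 <= num_columns <= 1_000 and 1_000 <= num_rows <= 100_000_000
```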
Input field definitions:
ML_USE : ("TRAIN" | "VALIDATE" | "TEST" | "UNASSIGNED") Describes how the given example (file) should be used for model training. "UNASSIGNED" can be used when the user has no preference.
GCS_FILE_PATH : The path to a file on Google Cloud Storage. For example, "gs://folder/image1.png".
LABEL : A display name of an object on an image, video etc., e.g. "dog". Must be up to 32 characters long and can consist only of ASCII Latin letters A-Z and a-z, underscores (_), and ASCII digits 0-9. For each label an AnnotationSpec is created whose display_name becomes the label; AnnotationSpecs are given back in predictions.
INSTANCE_ID : A positive integer that identifies a specificinstance of a labeled entity on an example. Used e.g. to track twocars on a video while being able to tell apart which one is which.
BOUNDING_BOX : (VERTEX,VERTEX,VERTEX,VERTEX | VERTEX,,,VERTEX,,) A rectangle parallel to the frame of the example (image, video). If 4 vertices are given, they are connected by edges in the order provided; if 2 are given, they are recognized as diagonally opposite vertices of the rectangle.
VERTEX : (COORDINATE,COORDINATE) First coordinate is horizontal (x), the second is vertical (y).

COORDINATE : A float in 0 to 1 range, relative to total length of image or video in given dimension. For fractions the leading non-decimal 0 can be omitted (i.e. 0.3 = .3). Point 0,0 is in top left.
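For the two-vertex form of BOUNDING_BOX, the full rectangle can be recovered from the diagonal. `rectangle_from_diagonal` is a hypothetical helper illustrating the definitions above; it keeps coordinates in the 0..1 range and treats 0,0 as top left.

```python
def rectangle_from_diagonal(v1, v2):
    """Given two diagonally opposite VERTEX pairs (x, y), return the
    four axis-aligned rectangle vertices in drawing order."""
    (x1, y1), (x2, y2) = v1, v2
    left, right = min(x1, x2), max(x1, x2)
    top, bottom = min(y1, y2), max(y1, y2)  # smaller y is higher up
    return [(left, top), (right, top), (right, bottom), (left, bottom)]

# The ".45,.45,,,.55,.55,," sample row expands to a full rectangle:
corners = rectangle_from_diagonal((0.45, 0.45), (0.55, 0.55))
```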
TIME_SEGMENT_START : (TIME_OFFSET) Expresses a beginning, inclusive, of a time segment within an example that has a time dimension (e.g. video).

TIME_SEGMENT_END : (TIME_OFFSET) Expresses an end, exclusive, of a time segment within an example that has a time dimension (e.g. video).

TIME_OFFSET : A number of seconds as measured from the start of an example (e.g. video). Fractions are allowed, up to a microsecond precision. "inf" is allowed, and it means the end of the example.
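The TIME_OFFSET definition can be sketched as a small parser; `parse_time_offset` is a hypothetical helper, and conveniently `float("inf")` already covers the "inf" spelling.

```python
def parse_time_offset(text, example_length):
    """Convert a TIME_OFFSET string to seconds; "inf" maps to the end
    of the example, per the definition above."""
    seconds = float(text)  # float("inf") handles the "inf" case
    if seconds == float("inf"):
        return example_length
    if not 0 <= seconds <= example_length:
        raise ValueError("offset outside the example")
    return seconds
```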
TEXT_SNIPPET : The content of a text snippet, UTF-8 encoded, enclosed within double quotes ("").

DOCUMENT : A field that provides the textual content of a document together with its layout information.
Errors:
If any of the provided CSV files can't be parsed, or if more than a certain percentage of CSV rows cannot be processed, then the operation fails and nothing is imported. Regardless of overall success or failure, the per-row failures, up to a certain count cap, are listed in Operation.metadata.partial_failures.
.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
ListDatasetsRequest
Request message for AutoMl.ListDatasets.
ListDatasetsResponse
Response message for AutoMl.ListDatasets.
ListModelEvaluationsRequest
Request message for AutoMl.ListModelEvaluations.
ListModelEvaluationsResponse
Response message for AutoMl.ListModelEvaluations.
ListModelsRequest
Request message for AutoMl.ListModels.
ListModelsResponse
Response message for AutoMl.ListModels.
Model
API proto representing a trained machine learning model.
This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
ModelEvaluation
Evaluation results of a model.
This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
ModelExportOutputConfig
Output configuration for ModelExport Action.
.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
NormalizedVertex
A vertex represents a 2D point in the image. The normalized vertex coordinates are between 0 to 1 fractions relative to the original plane (image, video). E.g. if the plane (e.g. whole image) would have size 10 x 20 then a point with normalized coordinates (0.1, 0.3) would be at the position (1, 6) on that plane.
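The 10 x 20 example above reduces to a one-line conversion; `to_absolute` is a hypothetical helper, not one of the generated types.

```python
def to_absolute(normalized_x, normalized_y, width, height):
    """Map normalized (0..1) coordinates onto a plane of the given size."""
    return (normalized_x * width, normalized_y * height)

# The docstring's example: (0.1, 0.3) on a 10 x 20 plane lands at (1, 6).
point = to_absolute(0.1, 0.3, 10, 20)
```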
OperationMetadata
Metadata used across all long running operations returned by the AutoML API.
This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.
.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
OutputConfig
For Translation: CSV file translation.csv, with each line in format: ML_USE,GCS_FILE_PATH. GCS_FILE_PATH leads to a .TSV file which describes examples that have given ML_USE, using the following row format per line: TEXT_SNIPPET (in source language) \t TEXT_SNIPPET (in target language)

For Tables: Output depends on whether the dataset was imported from Google Cloud Storage or BigQuery.

Google Cloud Storage case: gcs_destination must be set. Exported are CSV file(s) tables_1.csv, tables_2.csv, ..., tables_N.csv with each having as header line the table's column names, and all other lines contain values for the header columns.

BigQuery case: bigquery_destination pointing to a BigQuery project must be set. In the given project a new dataset will be created with name export_data_<automl-dataset-display-name>_<timestamp-of-export-call>, where <automl-dataset-display-name> will be made BigQuery-dataset-name compatible (e.g. most special characters will become underscores), and timestamp will be in YYYY_MM_DDThh_mm_ss_sssZ "based on ISO-8601" format. In that dataset a new table called primary_table will be created, and filled with precisely the same data as this obtained on import.
.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
PredictRequest
Request message for PredictionService.Predict.
PredictResponse
Response message for PredictionService.Predict.
TextClassificationDatasetMetadata
Dataset metadata for classification.
TextClassificationModelMetadata
Model metadata that is specific to text classification.
TextExtractionAnnotation
Annotation for identifying spans of text.
.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
TextExtractionDatasetMetadata
Dataset metadata that is specific to text extraction
TextExtractionEvaluationMetrics
Model evaluation metrics for text extraction problems.
TextExtractionModelMetadata
Model metadata that is specific to text extraction.
TextSegment
A contiguous part of a text (string), assuming it has a UTF-8 NFC encoding.
TextSentimentAnnotation
Contains annotation details specific to text sentiment.
TextSentimentDatasetMetadata
Dataset metadata for text sentiment.
TextSentimentEvaluationMetrics
Model evaluation metrics for text sentiment problems.
TextSentimentModelMetadata
Model metadata that is specific to text sentiment.
TextSnippet
A representation of a text snippet.
TranslationAnnotation
Annotation details specific to translation.
TranslationDatasetMetadata
Dataset metadata that is specific to translation.
TranslationEvaluationMetrics
Evaluation metrics for the dataset.
TranslationModelMetadata
Model metadata that is specific to translation.
UndeployModelOperationMetadata
Details of UndeployModel operation.
UndeployModelRequest
Request message for AutoMl.UndeployModel.
UpdateDatasetRequest
Request message for AutoMl.UpdateDataset.
UpdateModelRequest
Request message for AutoMl.UpdateModel.
Except as otherwise noted, the content of this page is licensed under theCreative Commons Attribution 4.0 License, and code samples are licensed under theApache 2.0 License. For details, see theGoogle Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-10-30 UTC.