Package types (2.17.0)

API documentation for the automl_v1beta1.types package.

Classes

AnnotationPayload

Contains annotation information that is relevant to AutoML.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

AnnotationSpec

A definition of an annotation spec.

ArrayStats

The data statistics of a series of ARRAY values.

BatchPredictInputConfig

Input configuration for BatchPredict Action.

The format of the input depends on the ML problem of the model used for prediction. As input source the gcs_source is expected, unless specified otherwise.

The formats are represented in EBNF with commas being literal and with non-terminal symbols defined near the end of this comment. The formats are:

  • For Image Classification: CSV file(s) with each line having just a single column: GCS_FILE_PATH which leads to an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG. This path is treated as the ID in the Batch predict output. Three sample rows:
    gs://folder/image1.jpeg
    gs://folder/image2.gif
    gs://folder/image3.png

  • For Image Object Detection: CSV file(s) with each line having just a single column: GCS_FILE_PATH which leads to an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG. This path is treated as the ID in the Batch predict output. Three sample rows:
    gs://folder/image1.jpeg
    gs://folder/image2.gif
    gs://folder/image3.png

  • For Video Classification: CSV file(s) with each line in format: GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END. GCS_FILE_PATH leads to a video of up to 50GB in size and up to 3h duration. Supported extensions: .MOV, .MPEG4, .MP4, .AVI. TIME_SEGMENT_START and TIME_SEGMENT_END must be within the length of the video, and the end has to be after the start. Three sample rows:
    gs://folder/video1.mp4,10,40
    gs://folder/video1.mp4,20,60
    gs://folder/vid2.mov,0,inf

  • For Video Object Tracking: CSV file(s) with each line in format: GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END. GCS_FILE_PATH leads to a video of up to 50GB in size and up to 3h duration. Supported extensions: .MOV, .MPEG4, .MP4, .AVI. TIME_SEGMENT_START and TIME_SEGMENT_END must be within the length of the video, and the end has to be after the start. Three sample rows:
    gs://folder/video1.mp4,10,240
    gs://folder/video1.mp4,300,360
    gs://folder/vid2.mov,0,inf

  • For Text Classification: CSV file(s) with each line having just a single column: GCS_FILE_PATH | TEXT_SNIPPET. Any given text file can have size up to 128kB. Any given text snippet content must have 60,000 characters or less. Three sample rows:
    gs://folder/text1.txt
    "Some text content to predict"
    gs://folder/text3.pdf
    Supported file extensions: .txt, .pdf

  • For Text Sentiment: CSV file(s) with each line having just a single column: GCS_FILE_PATH | TEXT_SNIPPET. Any given text file can have size up to 128kB. Any given text snippet content must have 500 characters or less. Three sample rows:
    gs://folder/text1.txt
    "Some text content to predict"
    gs://folder/text3.pdf
    Supported file extensions: .txt, .pdf

  • For Text Extraction: .JSONL (i.e. JSON Lines) file(s) which either provide text in-line or as documents (for a single BatchPredict call only one of these formats may be used). The in-line .JSONL file(s) contain per line a proto that wraps a temporary user-assigned TextSnippet ID (string up to 2000 characters long) called "id", a TextSnippet proto (in JSON representation) and zero or more TextFeature protos. Any given text snippet content must have 30,000 characters or less, and also be UTF-8 NFC encoded (ASCII already is). The IDs provided should be unique. The document .JSONL file(s) contain, per line, a proto that wraps a Document proto with input_config set. Only PDF documents are supported now, and each document must be up to 2MB large. Any given .JSONL file must be 100MB or smaller, and no more than 20 files may be given. Sample in-line JSON Lines file (presented here with artificial line breaks, but the only actual line break is denoted by \n): { "id": "my_first_id", "text_snippet": { "content": "dog car cat"}, "text_features": [ { "text_segment": {"start_offset": 4, "end_offset": 6}, "structural_type": PARAGRAPH, "bounding_poly": { "normalized_vertices": [ {"x": 0.1, "y": 0.1}, {"x": 0.1, "y": 0.3}, {"x": 0.3, "y": 0.3}, {"x": 0.3, "y": 0.1}, ] }, } ], }\n { "id": "2", "text_snippet": { "content": "An elaborate content", "mime_type": "text/plain" } } Sample document JSON Lines file (presented here with artificial line breaks, but the only actual line break is denoted by \n): { "document": { "input_config": { "gcs_source": { "input_uris": [ "gs://folder/document1.pdf" ] } } } }\n { "document": { "input_config": { "gcs_source": { "input_uris": [ "gs://folder/document2.pdf" ] } } } }

  • For Tables: Either gcs_source or bigquery_source. GCS case: CSV file(s), each by itself 10GB or smaller and total size must be 100GB or smaller, where the first file must have a header containing column names. If the first row of a subsequent file is the same as the header, then it is also treated as a header. All other rows contain values for the corresponding columns. The column names must contain the model's [input_feature_column_specs'][google.cloud.automl.v1beta1.TablesModelMetadata.input_feature_column_specs] display_name-s (order doesn't matter). The columns corresponding to the model's input feature column specs must contain values compatible with the column spec's data types. Prediction on all the rows, i.e. the CSV lines, will be attempted. For the FORECASTING prediction_type: all columns having TIME_SERIES_AVAILABLE_PAST_ONLY type will be ignored. First three sample rows of a CSV file:
    "First Name","Last Name","Dob","Addresses"
    "John","Doe","1968-01-22","[{"status":"current","address":"123_First_Avenue","city":"Seattle","state":"WA","zip":"11111","numberOfYears":"1"},{"status":"previous","address":"456_Main_Street","city":"Portland","state":"OR","zip":"22222","numberOfYears":"5"}]"
    "Jane","Doe","1980-10-16","[{"status":"current","address":"789_Any_Avenue","city":"Albany","state":"NY","zip":"33333","numberOfYears":"2"},{"status":"previous","address":"321_Main_Street","city":"Hoboken","state":"NJ","zip":"44444","numberOfYears":"3"}]"
    BigQuery case: A URI of a BigQuery table. The user data size of the BigQuery table must be 100GB or smaller. The column names must contain the model's [input_feature_column_specs'][google.cloud.automl.v1beta1.TablesModelMetadata.input_feature_column_specs] display_name-s (order doesn't matter). The columns corresponding to the model's input feature column specs must contain values compatible with the column spec's data types. Prediction on all the rows of the table will be attempted. For the FORECASTING prediction_type: all columns having TIME_SERIES_AVAILABLE_PAST_ONLY type will be ignored.

Definitions:
GCS_FILE_PATH = A path to a file on GCS, e.g. "gs://folder/video.avi".
TEXT_SNIPPET = The content of a text snippet, UTF-8 encoded, enclosed within double quotes ("").
TIME_SEGMENT_START = TIME_OFFSET. Expresses a beginning, inclusive, of a time segment within an example that has a time dimension (e.g. video).
TIME_SEGMENT_END = TIME_OFFSET. Expresses an end, exclusive, of a time segment within an example that has a time dimension (e.g. video).
TIME_OFFSET = A number of seconds as measured from the start of an example (e.g. video). Fractions are allowed, up to microsecond precision. "inf" is allowed and it means the end of the example.

Errors: If any of the provided CSV files can't be parsed or if more than a certain percent of CSV rows cannot be processed then the operation fails and prediction does not happen. Regardless of overall success or failure the per-row failures, up to a certain count cap, will be listed in Operation.metadata.partial_failures.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
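A minimal sketch, assuming the google-cloud-automl Python client, of constructing this message for the CSV case; the GCS path is a hypothetical placeholder.

::

    from google.cloud import automl_v1beta1

    # Batch prediction input read from a CSV manifest on GCS.
    input_config = automl_v1beta1.BatchPredictInputConfig(
        gcs_source=automl_v1beta1.GcsSource(
            input_uris=["gs://my-bucket/batch_input.csv"]  # hypothetical path
        )
    )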

BatchPredictOperationMetadata

Details of BatchPredict operation.

BatchPredictOutputConfig

Output configuration for BatchPredict Action.

As destination the gcs_destination must be set unless specified otherwise for a domain. If gcs_destination is set then in the given directory a new directory is created. Its name will be "prediction-<model-display-name>-<timestamp-of-prediction-call>", where timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. The contents of it depend on the ML problem the predictions are made for.

  • For Image Classification: In the created directory files image_classification_1.jsonl, image_classification_2.jsonl, ..., image_classification_N.jsonl will be created, where N may be 1, and depends on the total number of the successfully predicted images and annotations. A single image will be listed only once with all its annotations, and its annotations will never be split across files. Each .JSONL file will contain, per line, a JSON representation of a proto that wraps the image's "ID" : "<id_value>" followed by a list of zero or more AnnotationPayload protos (called annotations), which have classification detail populated. If prediction for any image failed (partially or completely), then additional errors_1.jsonl, errors_2.jsonl, ..., errors_N.jsonl files will be created (N depends on the total number of failed predictions). These files will have a JSON representation of a proto that wraps the same "ID" : "<id_value>" but here followed by exactly one [google.rpc.Status](https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto) containing only code and message fields.

  • For Image Object Detection: In the created directory files image_object_detection_1.jsonl, image_object_detection_2.jsonl, ..., image_object_detection_N.jsonl will be created, where N may be 1, and depends on the total number of the successfully predicted images and annotations. Each .JSONL file will contain, per line, a JSON representation of a proto that wraps the image's "ID" : "<id_value>" followed by a list of zero or more AnnotationPayload protos (called annotations), which have image_object_detection detail populated. A single image will be listed only once with all its annotations, and its annotations will never be split across files. If prediction for any image failed (partially or completely), then additional errors_1.jsonl, errors_2.jsonl, ..., errors_N.jsonl files will be created (N depends on the total number of failed predictions). These files will have a JSON representation of a proto that wraps the same "ID" : "<id_value>" but here followed by exactly one [google.rpc.Status](https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto) containing only code and message fields.

  • For Video Classification: In the created directory a video_classification.csv file, and a .JSON file per each video classification requested in the input (i.e. each line in the given CSV(s)), will be created. The format of video_classification.csv is: GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END,JSON_FILE_NAME,STATUS where: GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END = matches 1 to 1 the prediction input lines (i.e. video_classification.csv has precisely the same number of lines as the prediction input had); JSON_FILE_NAME = the name of the .JSON file in the output directory, which contains prediction responses for the video time segment; STATUS = "OK" if prediction completed successfully, or an error code with message otherwise. If STATUS is not "OK" then the .JSON file for that line may not exist or be empty. Each .JSON file, assuming STATUS is "OK", will contain a list of AnnotationPayload protos in JSON format, which are the predictions for the video time segment the file is assigned to in video_classification.csv. All AnnotationPayload protos will have the video_classification field set, and will be sorted by the video_classification.type field (note that the returned types are governed by the classification_types parameter in [PredictService.BatchPredictRequest.params][]).

  • For Video Object Tracking: In the created directory a video_object_tracking.csv file will be created, and multiple files video_object_tracking_1.json, video_object_tracking_2.json, ..., video_object_tracking_N.json, where N is the number of requests in the input (i.e. the number of lines in the given CSV(s)). The format of video_object_tracking.csv is: GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END,JSON_FILE_NAME,STATUS where: GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END = matches 1 to 1 the prediction input lines (i.e. video_object_tracking.csv has precisely the same number of lines as the prediction input had); JSON_FILE_NAME = the name of the .JSON file in the output directory, which contains prediction responses for the video time segment; STATUS = "OK" if prediction completed successfully, or an error code with message otherwise. If STATUS is not "OK" then the .JSON file for that line may not exist or be empty. Each .JSON file, assuming STATUS is "OK", will contain a list of AnnotationPayload protos in JSON format, which are the predictions for each frame of the video time segment the file is assigned to in video_object_tracking.csv. All AnnotationPayload protos will have the video_object_tracking field set.

  • For Text Classification: In the created directory files text_classification_1.jsonl, text_classification_2.jsonl, ..., text_classification_N.jsonl will be created, where N may be 1, and depends on the total number of inputs and annotations found. Each .JSONL file will contain, per line, a JSON representation of a proto that wraps the input text snippet or input text file and a list of zero or more AnnotationPayload protos (called annotations), which have classification detail populated. A single text snippet or file will be listed only once with all its annotations, and its annotations will never be split across files. If prediction for any text snippet or file failed (partially or completely), then additional errors_1.jsonl, errors_2.jsonl, ..., errors_N.jsonl files will be created (N depends on the total number of failed predictions). These files will have a JSON representation of a proto that wraps the input text snippet or input text file followed by exactly one [google.rpc.Status](https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto) containing only code and message.

  • For Text Sentiment: In the created directory files text_sentiment_1.jsonl, text_sentiment_2.jsonl, ..., text_sentiment_N.jsonl will be created, where N may be 1, and depends on the total number of inputs and annotations found. Each .JSONL file will contain, per line, a JSON representation of a proto that wraps the input text snippet or input text file and a list of zero or more AnnotationPayload protos (called annotations), which have text_sentiment detail populated. A single text snippet or file will be listed only once with all its annotations, and its annotations will never be split across files. If prediction for any text snippet or file failed (partially or completely), then additional errors_1.jsonl, errors_2.jsonl, ..., errors_N.jsonl files will be created (N depends on the total number of failed predictions). These files will have a JSON representation of a proto that wraps the input text snippet or input text file followed by exactly one [google.rpc.Status](https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto) containing only code and message.

  • For Text Extraction: In the created directory files text_extraction_1.jsonl, text_extraction_2.jsonl, ..., text_extraction_N.jsonl will be created, where N may be 1, and depends on the total number of inputs and annotations found. The contents of these .JSONL file(s) depend on whether the input used inline text or documents. If the input was inline, then each .JSONL file will contain, per line, a JSON representation of a proto that wraps the given-in-request text snippet's "id" (if specified), followed by the input text snippet, and a list of zero or more AnnotationPayload protos (called annotations), which have text_extraction detail populated. A single text snippet will be listed only once with all its annotations, and its annotations will never be split across files. If the input used documents, then each .JSONL file will contain, per line, a JSON representation of a proto that wraps the given-in-request document proto, followed by its OCR-ed representation in the form of a text snippet, finally followed by a list of zero or more AnnotationPayload protos (called annotations), which have text_extraction detail populated and refer, via their indices, to the OCR-ed text snippet. A single document (and its text snippet) will be listed only once with all its annotations, and its annotations will never be split across files. If prediction for any text snippet failed (partially or completely), then additional errors_1.jsonl, errors_2.jsonl, ..., errors_N.jsonl files will be created (N depends on the total number of failed predictions). These files will have a JSON representation of a proto that wraps either the "id" : "<id_value>" (in case of inline) or the document proto (in case of document) but here followed by exactly one [google.rpc.Status](https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto) containing only code and message.

  • For Tables: Output depends on whether gcs_destination or bigquery_destination is set (either is allowed). GCS case: In the created directory files tables_1.csv, tables_2.csv, ..., tables_N.csv will be created, where N may be 1, and depends on the total number of the successfully predicted rows. For all CLASSIFICATION prediction_type-s: Each .csv file will contain a header, listing all columns' display_name-s given on input followed by M target column names in the format of "<target_column_specs display_name>_<target value>_score", where M is the number of distinct target values, i.e. the number of distinct values in the target column of the table used to train the model. Subsequent lines will contain the respective values of successfully predicted rows, with the last, i.e. the target, columns having the corresponding prediction scores. For REGRESSION and FORECASTING prediction_type-s: Each .csv file will contain a header, listing all columns' display_name-s given on input followed by the predicted target column with name in the format of "predicted_<target_column_specs display_name>". Subsequent lines will contain the respective values of successfully predicted rows, with the last, i.e. the target, column having the predicted target value. If prediction for any rows failed, then additional errors_1.csv, errors_2.csv, ..., errors_N.csv will be created (N depends on the total number of failed rows). These files will have an analogous format to tables_*.csv, but always with a single target column having [google.rpc.Status](https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto) represented as a JSON string, and containing only code and message. BigQuery case: bigquery_destination pointing to a BigQuery project must be set. In the given project a new dataset will be created with name prediction_<model-display-name>_<timestamp-of-prediction-call>, where <model-display-name> will be made BigQuery-dataset-name compatible (e.g. most special characters will become underscores), and timestamp will be in YYYY_MM_DDThh_mm_ss_sssZ "based on ISO-8601" format. In the dataset two tables will be created, predictions and errors. The predictions table's column names will be the input columns' display_name-s followed by the target column with name in the format of "predicted_<target_column_specs display_name>". The input feature columns will contain the respective values of successfully predicted rows, with the target column having an ARRAY of AnnotationPayloads, represented as STRUCT-s, containing TablesAnnotation. The errors table contains rows for which the prediction has failed; it has analogous input columns while the target column name is in the format of "errors_<target_column_specs display_name>", and as a value has [google.rpc.Status](https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto) represented as a STRUCT, containing only code and message.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
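A minimal sketch of pairing this output configuration with a batch prediction call; the model path and bucket are hypothetical placeholders, and input_config is as in the BatchPredictInputConfig sketch above.

::

    from google.cloud import automl_v1beta1

    client = automl_v1beta1.PredictionServiceClient()
    output_config = automl_v1beta1.BatchPredictOutputConfig(
        gcs_destination=automl_v1beta1.GcsDestination(
            output_uri_prefix="gs://my-bucket/output/"  # hypothetical prefix
        )
    )
    response = client.batch_predict(
        name="projects/my-project/locations/us-central1/models/MODEL_ID",
        input_config=input_config,
        output_config=output_config,
    )
    response.result()  # blocks until the long-running operation completes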

BatchPredictRequest

Request message for PredictionService.BatchPredict.

BatchPredictResult

Result of the Batch Predict. This message is returned in the [response][google.longrunning.Operation.response] of the operation returned by PredictionService.BatchPredict.

BigQueryDestination

The BigQuery location for the output content.

BigQuerySource

The BigQuery location for the input content.

BoundingBoxMetricsEntry

Bounding box matching model metrics for a single intersection-over-union threshold and multiple label match confidence thresholds.

BoundingPoly

A bounding polygon of a detected object on a plane. On output both vertices and normalized_vertices are provided. The polygon is formed by connecting vertices in the order they are listed.

CategoryStats

The data statistics of a series of CATEGORY values.

ClassificationAnnotation

Contains annotation details specific to classification.

ClassificationEvaluationMetrics

Model evaluation metrics for classification problems. Note: For Video Classification these metrics only describe the quality of the Video Classification predictions of "segment_classification" type.

ClassificationType

Type of the classification problem.

ColumnSpec

A representation of a column in a relational table. When listing them, column specs are returned in the same order in which they were given on import. Used by:

  • Tables

CorrelationStats

Correlation statistics between two series of DataType values. The series may have differing DataType-s, but within a single series the DataType must be the same.

CreateDatasetRequest

Request message for AutoMl.CreateDataset.

CreateModelOperationMetadata

Details of CreateModel operation.

CreateModelRequest

Request message for AutoMl.CreateModel.

DataStats

The data statistics of a series of values that share the same DataType.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

DataType

Indicates the type of data that can be stored in a structured data entity (e.g. a table).

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

Dataset

A workspace for solving a single, particular machine learning (ML) problem. A workspace contains examples that may be annotated.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
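A minimal sketch of creating a dataset with the Python client; the project path and display name are hypothetical placeholders, and text classification is just one choice for the metadata oneof.

::

    from google.cloud import automl_v1beta1

    client = automl_v1beta1.AutoMlClient()
    dataset = automl_v1beta1.Dataset(
        display_name="my_dataset",  # hypothetical name
        text_classification_dataset_metadata=automl_v1beta1.TextClassificationDatasetMetadata(
            classification_type=automl_v1beta1.ClassificationType.MULTICLASS
        ),
    )
    # In v1beta1, CreateDataset returns the created Dataset directly.
    created = client.create_dataset(
        parent="projects/my-project/locations/us-central1",
        dataset=dataset,
    )
    print(created.name)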

DeleteDatasetRequest

Request message for AutoMl.DeleteDataset.

DeleteModelRequest

Request message for AutoMl.DeleteModel.

DeleteOperationMetadata

Details of operations that perform deletes of any entities.

DeployModelOperationMetadata

Details of DeployModel operation.

DeployModelRequest

Request message for AutoMl.DeployModel.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

Document

A structured text document, e.g. a PDF.

DocumentDimensions

Message that describes the dimensions of a document.

DocumentInputConfig

Input configuration of a Document.

DoubleRange

A range between two double numbers.

ExamplePayload

Example data used for training or prediction.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

ExportDataOperationMetadata

Details of ExportData operation.

ExportDataRequest

Request message for AutoMl.ExportData.

ExportEvaluatedExamplesOperationMetadata

Details of EvaluatedExamples operation.

ExportEvaluatedExamplesOutputConfig

Output configuration for ExportEvaluatedExamples Action. Note that this call is available only for 30 days since the moment the model was evaluated. The output depends on the domain, as follows (note that only examples from the TEST set are exported):

  • For Tables: bigquery_destination pointing to a BigQuery project must be set. In the given project a new dataset will be created with name export_evaluated_examples_<model-display-name>_<timestamp-of-export-call>, where <model-display-name> will be made BigQuery-dataset-name compatible (e.g. most special characters will become underscores), and timestamp will be in YYYY_MM_DDThh_mm_ss_sssZ "based on ISO-8601" format. In the dataset an evaluated_examples table will be created. It will have all the same columns as the primary_table of the dataset from which the model was created, as they were at the moment of the model's evaluation (this includes the target column with its ground truth), followed by a column called "predicted_<target_column>". That last column will contain the model's prediction result for each respective row, given as an ARRAY of AnnotationPayloads, represented as STRUCT-s, containing TablesAnnotation.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
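A minimal sketch of requesting this export with the Python client; the model path and BigQuery project URI are hypothetical placeholders.

::

    from google.cloud import automl_v1beta1

    client = automl_v1beta1.AutoMlClient()
    response = client.export_evaluated_examples(
        name="projects/my-project/locations/us-central1/models/MODEL_ID",
        output_config=automl_v1beta1.ExportEvaluatedExamplesOutputConfig(
            bigquery_destination=automl_v1beta1.BigQueryDestination(
                output_uri="bq://my-project"  # hypothetical BigQuery project URI
            )
        ),
    )
    response.result()  # blocks until the export completes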

ExportEvaluatedExamplesRequest

Request message for AutoMl.ExportEvaluatedExamples.

ExportModelOperationMetadata

Details of ExportModel operation.

ExportModelRequest

Request message for AutoMl.ExportModel. Models need to be enabled for exporting, otherwise an error code will be returned.

Float64Stats

The data statistics of a series of FLOAT64 values.

GcrDestination

The GCR location to which the image must be pushed.

GcsDestination

The Google Cloud Storage location where the output is to be written.

GcsSource

The Google Cloud Storage location for the input content.

GetAnnotationSpecRequest

Request message for AutoMl.GetAnnotationSpec.

GetColumnSpecRequest

Request message for AutoMl.GetColumnSpec.

GetDatasetRequest

Request message for AutoMl.GetDataset.

GetModelEvaluationRequest

Request message for AutoMl.GetModelEvaluation.

GetModelRequest

Request message for AutoMl.GetModel.

GetTableSpecRequest

Request message for AutoMl.GetTableSpec.

Image

A representation of an image. Only images up to 30MB in size are supported.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

ImageClassificationDatasetMetadata

Dataset metadata that is specific to image classification.

ImageClassificationModelDeploymentMetadata

Model deployment metadata specific to Image Classification.

ImageClassificationModelMetadata

Model metadata for image classification.

ImageObjectDetectionAnnotation

Annotation details for image object detection.

ImageObjectDetectionDatasetMetadata

Dataset metadata specific to image object detection.

ImageObjectDetectionEvaluationMetrics

Model evaluation metrics for image object detection problems. Evaluates prediction quality of labeled bounding boxes.

ImageObjectDetectionModelDeploymentMetadata

Model deployment metadata specific to Image Object Detection.

ImageObjectDetectionModelMetadata

Model metadata specific to image object detection.

ImportDataOperationMetadata

Details of ImportData operation.

ImportDataRequest

Request message for AutoMl.ImportData.

InputConfig

Input configuration for ImportData Action.

The format of the input depends on the dataset_metadata of the Dataset into which the import is happening. As input source the gcs_source is expected, unless specified otherwise. Additionally any input .CSV file by itself must be 100MB or smaller, unless specified otherwise. If an "example" file (that is, image, video etc.) with identical content (even if it had a different GCS_FILE_PATH) is mentioned multiple times, then its label, bounding boxes etc. are appended. The same file should always be provided with the same ML_USE and GCS_FILE_PATH; if it is not, then these values are nondeterministically selected from the given ones.

The formats are represented in EBNF with commas being literal and with non-terminal symbols defined near the end of this comment. The formats are:

  • For Image Classification: CSV file(s) with each line in format: ML_USE,GCS_FILE_PATH,LABEL,LABEL,... GCS_FILE_PATH leads to an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG, .WEBP, .BMP, .TIFF, .ICO. For the MULTICLASS classification type, at most one LABEL is allowed per image. If an image has not yet been labeled, then it should be mentioned just once with no LABEL. Some sample rows:
    TRAIN,gs://folder/image1.jpg,daisy
    TEST,gs://folder/image2.jpg,dandelion,tulip,rose
    UNASSIGNED,gs://folder/image3.jpg,daisy
    UNASSIGNED,gs://folder/image4.jpg

  • For Image Object Detection: CSV file(s) with each line in format: ML_USE,GCS_FILE_PATH,(LABEL,BOUNDING_BOX | ,,,,,,,) GCS_FILE_PATH leads to an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG. Each image is assumed to be exhaustively labeled. The minimum allowed BOUNDING_BOX edge length is 0.01, and no more than 500 BOUNDING_BOX-es per image are allowed (one BOUNDING_BOX is defined per line). If an image has not yet been labeled, then it should be mentioned just once with no LABEL and ",,,,,,," in place of the BOUNDING_BOX. Images which are known to not contain any bounding boxes should be labeled explicitly as "NEGATIVE_IMAGE", followed by ",,,,,,," in place of the BOUNDING_BOX. Sample rows:
    TRAIN,gs://folder/image1.png,car,0.1,0.1,,,0.3,0.3,,
    TRAIN,gs://folder/image1.png,bike,.7,.6,,,.8,.9,,
    UNASSIGNED,gs://folder/im2.png,car,0.1,0.1,0.2,0.1,0.2,0.3,0.1,0.3
    TEST,gs://folder/im3.png,,,,,,,,,
    TRAIN,gs://folder/im4.png,NEGATIVE_IMAGE,,,,,,,,,

  • For Video Classification: CSV file(s) with each line in format: ML_USE,GCS_FILE_PATH where the ML_USE VALIDATE value should not be used. The GCS_FILE_PATH should lead to another .csv file which describes examples that have the given ML_USE, using the following row format: GCS_FILE_PATH,(LABEL,TIME_SEGMENT_START,TIME_SEGMENT_END | ,,) Here GCS_FILE_PATH leads to a video of up to 50GB in size and up to 3h duration. Supported extensions: .MOV, .MPEG4, .MP4, .AVI. TIME_SEGMENT_START and TIME_SEGMENT_END must be within the length of the video, and the end has to be after the start. Any segment of a video which has one or more labels on it is considered a hard negative for all other labels. Any segment with no labels on it is considered to be unknown. If a whole video is unknown, then it should be mentioned just once with ",," in place of LABEL,TIME_SEGMENT_START,TIME_SEGMENT_END. Sample top level CSV file:
    TRAIN,gs://folder/train_videos.csv
    TEST,gs://folder/test_videos.csv
    UNASSIGNED,gs://folder/other_videos.csv
    Sample rows of a CSV file for a particular ML_USE:
    gs://folder/video1.avi,car,120,180.000021
    gs://folder/video1.avi,bike,150,180.000021
    gs://folder/vid2.avi,car,0,60.5
    gs://folder/vid3.avi,,,

  • For Video Object Tracking: CSV file(s) with each line in format: ML_USE,GCS_FILE_PATH where the ML_USE VALIDATE value should not be used. The GCS_FILE_PATH should lead to another .csv file which describes examples that have the given ML_USE, using one of the following row formats: GCS_FILE_PATH,LABEL,[INSTANCE_ID],TIMESTAMP,BOUNDING_BOX or GCS_FILE_PATH,,,,,,,,,, Here GCS_FILE_PATH leads to a video of up to 50GB in size and up to 3h duration. Supported extensions: .MOV, .MPEG4, .MP4, .AVI. Providing INSTANCE_IDs can help to obtain a better model. When a specific labeled entity leaves the video frame and shows up afterwards, it is not required, albeit preferable, that the same INSTANCE_ID is given to it. TIMESTAMP must be within the length of the video; the BOUNDING_BOX is assumed to be drawn on the video frame closest to the TIMESTAMP. Any frame mentioned by a TIMESTAMP is expected to be exhaustively labeled, and no more than 500 BOUNDING_BOX-es per frame are allowed. If a whole video is unknown, then it should be mentioned just once with ",,,,,,,,,," in place of LABEL,[INSTANCE_ID],TIMESTAMP,BOUNDING_BOX. Sample top level CSV file:
    TRAIN,gs://folder/train_videos.csv
    TEST,gs://folder/test_videos.csv
    UNASSIGNED,gs://folder/other_videos.csv
    Seven sample rows of a CSV file for a particular ML_USE:
    gs://folder/video1.avi,car,1,12.10,0.8,0.8,0.9,0.8,0.9,0.9,0.8,0.9
    gs://folder/video1.avi,car,1,12.90,0.4,0.8,0.5,0.8,0.5,0.9,0.4,0.9
    gs://folder/video1.avi,car,2,12.10,.4,.2,.5,.2,.5,.3,.4,.3
    gs://folder/video1.avi,car,2,12.90,.8,.2,,,.9,.3,,
    gs://folder/video1.avi,bike,,12.50,.45,.45,,,.55,.55,,
    gs://folder/video2.avi,car,1,0,.1,.9,,,.9,.1,,
    gs://folder/video2.avi,,,,,,,,,,,

  • For Text Extraction: CSV file(s) with each line in format: ML_USE,GCS_FILE_PATH GCS_FILE_PATH leads to a .JSONL (that is, JSON Lines) file which either imports text in-line or as documents. Any given .JSONL file must be 100MB or smaller. The in-line .JSONL file contains, per line, a proto that wraps a TextSnippet proto (in JSON representation) followed by one or more AnnotationPayload protos (called annotations), which have display_name and text_extraction detail populated. The given text is expected to be annotated exhaustively; for example, if you look for animals and the text contains "dolphin" that is not labeled, then "dolphin" is assumed to not be an animal. Any given text snippet content must be 10KB or smaller, and also be UTF-8 NFC encoded (ASCII already is). The document .JSONL file contains, per line, a proto that wraps a Document proto. The Document proto must have either document_text or input_config set. In the document_text case, the Document proto may also contain the spatial information of the document, including layout, document dimension and page number. In the input_config case, only PDF documents are supported now, and each document may be up to 2MB large. Currently, annotations on documents cannot be specified at import. Three sample CSV rows:
    TRAIN,gs://folder/file1.jsonl
    VALIDATE,gs://folder/file2.jsonl
    TEST,gs://folder/file3.jsonl
    Sample in-line JSON Lines file for entity extraction (presented here with artificial line breaks, but the only actual line break is denoted by \n):

    ::

        {
          "document": {
            "document_text": {"content": "dog cat"}
            "layout": [
              {
                "text_segment": {
                  "start_offset": 0,
                  "end_offset": 3,
                },
                "page_number": 1,
                "bounding_poly": {
                  "normalized_vertices": [
                    {"x": 0.1, "y": 0.1},
                    {"x": 0.1, "y": 0.3},
                    {"x": 0.3, "y": 0.3},
                    {"x": 0.3, "y": 0.1},
                  ],
                },
                "text_segment_type": TOKEN,
              },
              {
                "text_segment": {
                  "start_offset": 4,
                  "end_offset": 7,
                },
                "page_number": 1,
                "bounding_poly": {
                  "normalized_vertices": [
                    {"x": 0.4, "y": 0.1},
                    {"x": 0.4, "y": 0.3},
                    {"x": 0.8, "y": 0.3},
                    {"x": 0.8, "y": 0.1},
                  ],
                },
                "text_segment_type": TOKEN,
              }
            ],
            "document_dimensions": {
              "width": 8.27,
              "height": 11.69,
              "unit": INCH,
            }
            "page_count": 1,
          },
          "annotations": [
            {
              "display_name": "animal",
              "text_extraction": {"text_segment": {"start_offset": 0, "end_offset": 3}}
            },
            {
              "display_name": "animal",
              "text_extraction": {"text_segment": {"start_offset": 4, "end_offset": 7}}
            }
          ],
        }\n
        {
          "text_snippet": {
            "content": "This dog is good."
          },
          "annotations": [
            {
              "display_name": "animal",
              "text_extraction": {
                "text_segment": {"start_offset": 5, "end_offset": 8}
              }
            }
          ]
        }

    Sample document JSON Lines file (presented here with artificial line breaks, but the only actual line break is denoted by \n): { "document": { "input_config": { "gcs_source": { "input_uris": [ "gs://folder/document1.pdf" ] } } } }\n { "document": { "input_config": { "gcs_source": { "input_uris": [ "gs://folder/document2.pdf" ] } } } }

  • For Text Classification: CSV file(s) with each line in format: ML_USE,(TEXT_SNIPPET | GCS_FILE_PATH),LABEL,LABEL,... TEXT_SNIPPET and GCS_FILE_PATH are distinguished by a pattern. If the column content is a valid GCS file path, i.e. prefixed by "gs://", it will be treated as a GCS_FILE_PATH; else, if the content is enclosed within double quotes (""), it is treated as a TEXT_SNIPPET. In the GCS_FILE_PATH case, the path must lead to a .txt file with UTF-8 encoding, for example, "gs://folder/content.txt", and the content in it is extracted as a text snippet. In the TEXT_SNIPPET case, the column content excluding quotes is treated as the text snippet to be imported. In both cases, the text snippet/file size must be within 128kB. A maximum of 100 unique labels are allowed per CSV row. Sample rows:
    TRAIN,"They have bad food and very rude",RudeService,BadFood
    TRAIN,gs://folder/content.txt,SlowService
    TEST,"Typically always bad service there.",RudeService
    VALIDATE,"Stomach ache to go.",BadFood

  • For Text Sentiment: CSV file(s) with each line in format: ML_USE,(TEXT_SNIPPET | GCS_FILE_PATH),SENTIMENT TEXT_SNIPPET and GCS_FILE_PATH are distinguished by a pattern. If the column content is a valid GCS file path, that is, prefixed by "gs://", it is treated as a GCS_FILE_PATH, otherwise it is treated as a TEXT_SNIPPET. In the GCS_FILE_PATH case, the path must lead to a .txt file with UTF-8 encoding, for example, "gs://folder/content.txt", and the content in it is extracted as a text snippet. In the TEXT_SNIPPET case, the column content itself is treated as the text snippet to be imported. In both cases, the text snippet must be up to 500 characters long. Sample rows:
    TRAIN,"@freewrytin this is way too good for your product",2
    TRAIN,"I need this product so bad",3
    TEST,"Thank you for this product.",4
    VALIDATE,gs://folder/content.txt,2

  • For Tables: Either gcs_source or bigquery_source can be used. All input is concatenated into a single primary_table. For gcs_source: CSV file(s), where the first row of the first file is the header, containing unique column names. If the first row of a subsequent file is the same as the header, then it is also treated as a header. All other rows contain values for the corresponding columns. Each .CSV file by itself must be 10GB or smaller, and their total size must be 100GB or smaller. First three sample rows of a CSV file:
    "Id","First Name","Last Name","Dob","Addresses"
    "1","John","Doe","1968-01-22","[{"status":"current","address":"123_First_Avenue","city":"Seattle","state":"WA","zip":"11111","numberOfYears":"1"},{"status":"previous","address":"456_Main_Street","city":"Portland","state":"OR","zip":"22222","numberOfYears":"5"}]"
    "2","Jane","Doe","1980-10-16","[{"status":"current","address":"789_Any_Avenue","city":"Albany","state":"NY","zip":"33333","numberOfYears":"2"},{"status":"previous","address":"321_Main_Street","city":"Hoboken","state":"NJ","zip":"44444","numberOfYears":"3"}]"
    For bigquery_source: A URI of a BigQuery table. The user data size of the BigQuery table must be 100GB or smaller. An imported table must have between 2 and 1,000 columns, inclusive, and between 1,000 and 100,000,000 rows, inclusive. At most 5 import data operations can run in parallel.

Definitions:
ML_USE = "TRAIN" | "VALIDATE" | "TEST" | "UNASSIGNED". Describes how the given example (file) should be used for model training. "UNASSIGNED" can be used when the user has no preference.
GCS_FILE_PATH = A path to a file on GCS, e.g. "gs://folder/image1.png".
LABEL = A display name of an object on an image, video etc., e.g. "dog". Must be up to 32 characters long and can consist only of ASCII Latin letters A-Z and a-z, underscores (_), and ASCII digits 0-9. For each label an AnnotationSpec is created whose display_name becomes the label; AnnotationSpecs are given back in predictions.
INSTANCE_ID = A positive integer that identifies a specific instance of a labeled entity on an example. Used e.g. to track two cars in a video while being able to tell apart which one is which.
BOUNDING_BOX = VERTEX,VERTEX,VERTEX,VERTEX | VERTEX,,,VERTEX,, A rectangle parallel to the frame of the example (image, video). If 4 vertices are given they are connected by edges in the order provided; if 2 are given they are recognized as diagonally opposite vertices of the rectangle.
VERTEX = COORDINATE,COORDINATE. The first coordinate is horizontal (x), the second is vertical (y).
COORDINATE = A float in the 0 to 1 range, relative to the total length of the image or video in the given dimension. For fractions the leading non-decimal 0 can be omitted (i.e. 0.3 = .3). Point 0,0 is in the top left.
TIME_SEGMENT_START = TIME_OFFSET. Expresses a beginning, inclusive, of a time segment within an example that has a time dimension (e.g. video).
TIME_SEGMENT_END = TIME_OFFSET. Expresses an end, exclusive, of a time segment within an example that has a time dimension (e.g. video).
TIME_OFFSET = A number of seconds as measured from the start of an example (e.g. video). Fractions are allowed, up to microsecond precision. "inf" is allowed, and it means the end of the example.
TEXT_SNIPPET = The content of a text snippet, UTF-8 encoded, enclosed within double quotes ("").
SENTIMENT = An integer between 0 and Dataset.text_sentiment_dataset_metadata.sentiment_max (inclusive). Describes the ordinal of the sentiment: a higher value means a more positive sentiment. All the values are completely relative, i.e. neither does 0 need to mean a negative or neutral sentiment nor does sentiment_max need to mean a positive one; it is only required that 0 is the least positive sentiment in the data and sentiment_max is the most positive one. The SENTIMENT shouldn't be confused with "score" or "magnitude" from the previous Natural Language Sentiment Analysis API. All SENTIMENT values between 0 and sentiment_max must be represented in the imported data. On prediction the same 0 to sentiment_max range will be used. The difference between neighboring sentiment values need not be uniform, e.g. 1 and 2 may be similar whereas the difference between 2 and 3 may be large.

Errors: If any of the provided CSV files can't be parsed or if more than a certain percent of CSV rows cannot be processed then the operation fails and nothing is imported. Regardless of overall success or failure the per-row failures, up to a certain count cap, are listed in Operation.metadata.partial_failures.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
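A minimal sketch of using this configuration to import CSV data from GCS into an existing dataset; the dataset path and bucket are hypothetical placeholders.

::

    from google.cloud import automl_v1beta1

    client = automl_v1beta1.AutoMlClient()
    response = client.import_data(
        name="projects/my-project/locations/us-central1/datasets/DATASET_ID",
        input_config=automl_v1beta1.InputConfig(
            gcs_source=automl_v1beta1.GcsSource(
                input_uris=["gs://my-bucket/train.csv"]  # hypothetical path
            )
        ),
    )
    response.result()  # blocks until the import operation completes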

ListColumnSpecsRequest

Request message for AutoMl.ListColumnSpecs.

ListColumnSpecsResponse

Response message for AutoMl.ListColumnSpecs.

ListDatasetsRequest

Request message for AutoMl.ListDatasets.

ListDatasetsResponse

Response message for AutoMl.ListDatasets.

ListModelEvaluationsRequest

Request message for AutoMl.ListModelEvaluations.

ListModelEvaluationsResponse

Response message for AutoMl.ListModelEvaluations.

ListModelsRequest

Request message for AutoMl.ListModels.

ListModelsResponse

Response message for AutoMl.ListModels.

ListTableSpecsRequest

Request message for AutoMl.ListTableSpecs.

ListTableSpecsResponse

Response message for AutoMl.ListTableSpecs.

Model

API proto representing a trained machine learning model.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

ModelEvaluation

Evaluation results of a model.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

ModelExportOutputConfig

Output configuration for ModelExport Action.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

NormalizedVertex

A vertex represents a 2D point in the image. The normalized vertex coordinates are between 0 and 1, as fractions relative to the original plane (image, video). E.g. if the plane (e.g. the whole image) has size 10 x 20, then a point with normalized coordinates (0.1, 0.3) would be at the position (1, 6) on that plane.
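The denormalization arithmetic described above is straightforward; a short illustrative sketch (the function name is ours, not part of the API):

::

    def denormalize(x: float, y: float, width: int, height: int) -> tuple:
        """Map normalized (0..1) vertex coordinates onto a width x height plane."""
        return (x * width, y * height)

    # On a 10 x 20 plane, normalized (0.1, 0.3) lands at absolute (1.0, 6.0).
    print(denormalize(0.1, 0.3, 10, 20))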

OperationMetadata

Metadata used across all long running operations returned by the AutoML API.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

OutputConfig

Output configuration for ExportData Action.

  • For Translation: CSV file translation.csv, with each line in format: ML_USE,GCS_FILE_PATH GCS_FILE_PATH leads to a .TSV file which describes examples that have the given ML_USE, using the following row format per line: TEXT_SNIPPET (in source language) \t TEXT_SNIPPET (in target language)

  • For Tables: Output depends on whether the dataset was imported from GCS or BigQuery. GCS case: gcs_destination must be set. Exported are CSV file(s) tables_1.csv, tables_2.csv, ..., tables_N.csv, each having as header line the table's column names, with all other lines containing values for the header columns. BigQuery case: bigquery_destination pointing to a BigQuery project must be set. In the given project a new dataset will be created with name export_data_<automl-dataset-display-name>_<timestamp-of-export-call>, where <automl-dataset-display-name> will be made BigQuery-dataset-name compatible (e.g. most special characters will become underscores), and timestamp will be in YYYY_MM_DDThh_mm_ss_sssZ "based on ISO-8601" format. In that dataset a new table called primary_table will be created, and filled with precisely the same data as was obtained on import.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
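A minimal sketch of exporting a dataset to GCS with this configuration; the dataset path and bucket are hypothetical placeholders.

::

    from google.cloud import automl_v1beta1

    client = automl_v1beta1.AutoMlClient()
    response = client.export_data(
        name="projects/my-project/locations/us-central1/datasets/DATASET_ID",
        output_config=automl_v1beta1.OutputConfig(
            gcs_destination=automl_v1beta1.GcsDestination(
                output_uri_prefix="gs://my-bucket/export/"  # hypothetical prefix
            )
        ),
    )
    response.result()  # blocks until the export operation completes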

PredictRequest

Request message for PredictionService.Predict.

PredictResponse

Response message for PredictionService.Predict.
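A minimal sketch of an online Predict call on a text snippet; the model path is a hypothetical placeholder.

::

    from google.cloud import automl_v1beta1

    client = automl_v1beta1.PredictionServiceClient()
    payload = automl_v1beta1.ExamplePayload(
        text_snippet=automl_v1beta1.TextSnippet(
            content="This product is great!", mime_type="text/plain"
        )
    )
    response = client.predict(
        name="projects/my-project/locations/us-central1/models/MODEL_ID",
        payload=payload,
    )
    # Each returned AnnotationPayload carries the predicted label.
    for annotation in response.payload:
        print(annotation.display_name)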

RegressionEvaluationMetrics

Metrics for regression problems.

Row

A representation of a row in a relational table.

StringStats

The data statistics of a series of STRING values.

StructStats

The data statistics of a series of STRUCT values.

StructType

StructType defines the DataType-s of a STRUCT type.

TableSpec

A specification of a relational table. The table's schema is represented via its child column specs. It is pre-populated as part of ImportData by the schema inference algorithm, the version of which is a required parameter of ImportData InputConfig. Note: While working with a table, at times the schema may be inconsistent with the data in the table (e.g. a string in a FLOAT64 column). The consistency validation is done upon creation of a model. Used by:

  • Tables

TablesAnnotation

Contains annotation details specific to Tables.

TablesDatasetMetadata

Metadata for a dataset used for AutoML Tables.

TablesModelColumnInfo

Information specific to a given column and Tables model, in the context of the model and the predictions created by it.

TablesModelMetadata

Model metadata specific to AutoML Tables.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

TextClassificationDatasetMetadata

Dataset metadata for classification.

TextClassificationModelMetadata

Model metadata that is specific to text classification.

TextExtractionAnnotation

Annotation for identifying spans of text.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields

TextExtractionDatasetMetadata

Dataset metadata that is specific to text extraction.

TextExtractionEvaluationMetrics

Model evaluation metrics for text extraction problems.

TextExtractionModelMetadata

Model metadata that is specific to text extraction.

TextSegment

A contiguous part of a text (string), assuming it has a UTF-8 NFC encoding.

TextSentimentAnnotation

Contains annotation details specific to text sentiment.

TextSentimentDatasetMetadata

Dataset metadata for text sentiment.

TextSentimentEvaluationMetrics

Model evaluation metrics for text sentiment problems.

TextSentimentModelMetadata

Model metadata that is specific to text sentiment.

TextSnippet

A representation of a text snippet.

TimeSegment

A time period inside of an example that has a time dimension (e.g. video).

TimestampStats

The data statistics of a series of TIMESTAMP values.

TranslationAnnotation

Annotation details specific to translation.

TranslationDatasetMetadata

Dataset metadata that is specific to translation.

TranslationEvaluationMetrics

Evaluation metrics for the dataset.

TranslationModelMetadata

Model metadata that is specific to translation.

TypeCode

TypeCode is used as a part of DataType.

    <xref uid="google.cloud.automl.v1beta1.DataType.list_element_type">list_element_type</xref>.STRUCT (9):    Encoded as `struct`, where field values are represented    according to    <xref uid="google.cloud.automl.v1beta1.DataType.struct_type">struct_type</xref>.CATEGORY (10):    Values of this type are not further understood by AutoML,    e.g. AutoML is unable to tell the order of values (as it    could with FLOAT64), or is unable to say if one value    contains another (as it could with STRING). Encoded as    `string` (bytes should be base64-encoded, as described in    RFC 4648, section 4).

UndeployModelOperationMetadata

Details of UndeployModel operation.

UndeployModelRequest

Request message for AutoMl.UndeployModel.

UpdateColumnSpecRequest

Request message for AutoMl.UpdateColumnSpec.

UpdateDatasetRequest

Request message for AutoMl.UpdateDataset.

UpdateTableSpecRequest

Request message for AutoMl.UpdateTableSpec.

VideoClassificationAnnotation

Contains annotation details specific to video classification.

VideoClassificationDatasetMetadata

Dataset metadata specific to video classification. All Video Classification datasets are treated as multi-label.

VideoClassificationModelMetadata

Model metadata specific to video classification.

VideoObjectTrackingAnnotation

Annotation details for video object tracking.

VideoObjectTrackingDatasetMetadata

Dataset metadata specific to video object tracking.

VideoObjectTrackingEvaluationMetrics

Model evaluation metrics for video object tracking problems. Evaluates prediction quality of both labeled bounding boxes and labeled tracks (i.e. series of bounding boxes sharing the same label and instance ID).

VideoObjectTrackingModelMetadata

Model metadata specific to video object tracking.
