Types for Google Cloud Aiplatform V1 Schema Predict Prediction v1 API
class google.cloud.aiplatform.v1.schema.predict.prediction_v1.types.ClassificationPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)
Bases: proto.message.Message
Prediction output format for Image and Text Classification.
ids()
The resource IDs of the AnnotationSpecs that had been identified.
Type
MutableSequence[int]
display_names()
The display names of the AnnotationSpecs that had been identified; order matches the IDs.
Type
MutableSequence[str]
confidences()
The Model's confidences in correctness of the predicted IDs; a higher value means higher confidence. Order matches the IDs.
Type
MutableSequence[float]
confidences: MutableSequence[float]
display_names: MutableSequence[str]
ids: MutableSequence[int]
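The three sequences above are parallel: the i-th id, display name, and confidence all describe the same predicted AnnotationSpec. A minimal plain-Python sketch of consuming them (the helper name and sample values are hypothetical, not part of the API):

```python
def labeled_predictions(ids, display_names, confidences):
    """Pair each AnnotationSpec with its confidence, sorted high to low.

    The three sequences are parallel: the i-th entries all describe the
    same predicted AnnotationSpec.
    """
    paired = zip(ids, display_names, confidences)
    return sorted(paired, key=lambda t: t[2], reverse=True)

# Hypothetical values for illustration.
preds = labeled_predictions(
    ids=[101, 102, 103],
    display_names=["cat", "dog", "bird"],
    confidences=[0.15, 0.80, 0.05],
)
# preds[0] is the top prediction: (102, 'dog', 0.8)
```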
class google.cloud.aiplatform.v1.schema.predict.prediction_v1.types.ImageObjectDetectionPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)
Bases: proto.message.Message
Prediction output format for Image Object Detection.
ids()
The resource IDs of the AnnotationSpecs that had been identified, ordered by confidence score in descending order.
Type
MutableSequence[int]
display_names()
The display names of the AnnotationSpecs that had been identified; order matches the IDs.
Type
MutableSequence[str]
confidences()
The Model's confidences in correctness of the predicted IDs; a higher value means higher confidence. Order matches the IDs.
Type
MutableSequence[float]
bboxes()
Bounding boxes, i.e. the rectangles over the image that pinpoint the found AnnotationSpecs. Given in order that matches the IDs. Each bounding box is an array of 4 numbers xMin, xMax, yMin, and yMax, which represent the extremal coordinates of the box. They are relative to the image size, and the point 0,0 is in the top left of the image.
Type
MutableSequence[google.protobuf.struct_pb2.ListValue]
bboxes: MutableSequence[google.protobuf.struct_pb2.ListValue]
confidences: MutableSequence[float]
display_names: MutableSequence[str]
ids: MutableSequence[int]
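Since each bounding box is given as [xMin, xMax, yMin, yMax] in coordinates relative to the image size, converting one to pixel coordinates is a small calculation; a sketch (the helper name and the 640x480 image size are illustrative assumptions):

```python
def bbox_to_pixels(bbox, image_width, image_height):
    """Convert one relative bounding box [xMin, xMax, yMin, yMax]
    (coordinates in [0, 1], origin 0,0 at the top left) to pixels."""
    x_min, x_max, y_min, y_max = bbox
    return (
        round(x_min * image_width),
        round(x_max * image_width),
        round(y_min * image_height),
        round(y_max * image_height),
    )

# A box covering the right half of a 640x480 image.
print(bbox_to_pixels([0.5, 1.0, 0.0, 1.0], 640, 480))  # (320, 640, 0, 480)
```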
class google.cloud.aiplatform.v1.schema.predict.prediction_v1.types.ImageSegmentationPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)
Bases: proto.message.Message
Prediction output format for Image Segmentation.
category_mask()
A PNG image where each pixel in the mask represents the category to which the pixel in the original image was predicted to belong. The size of this image will be the same as the original image. The mapping between the AnnotationSpec and the color can be found in the model's metadata. The model will choose the most likely category, and if none of the categories reach the confidence threshold, the pixel will be marked as background.
Type
str
confidence_mask()
A one-channel image which is encoded as an 8-bit lossless PNG. The size of the image will be the same as the original image. For a specific pixel, a darker color means less confidence in the correctness of the category in the categoryMask for the corresponding pixel. Black means no confidence and white means complete confidence.
Type
str
category_mask: str
confidence_mask: str
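Because the confidence mask is an 8-bit single-channel image running from black (no confidence) to white (complete confidence), a decoded pixel value maps linearly onto [0, 1]; a sketch of that mapping (the helper name is hypothetical, and actually decoding the PNG is out of scope here):

```python
def pixel_confidence(gray_value):
    """Map one 8-bit confidence_mask pixel (0..255) to [0.0, 1.0].
    Black (0) means no confidence; white (255) complete confidence."""
    if not 0 <= gray_value <= 255:
        raise ValueError("expected an 8-bit value")
    return gray_value / 255

print(pixel_confidence(255))  # 1.0
print(pixel_confidence(0))    # 0.0
```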
class google.cloud.aiplatform.v1.schema.predict.prediction_v1.types.TabularClassificationPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)
Bases: proto.message.Message
Prediction output format for Tabular Classification.
classes()
The name of the classes being classified; contains all possible values of the target column.
Type
MutableSequence[str]
scores()
The model's confidence in each class being correct; a higher value means higher confidence. The N-th score corresponds to the N-th class in classes.
Type
MutableSequence[float]
classes: MutableSequence[str]
scores: MutableSequence[float]
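Because the N-th score corresponds to the N-th class, picking the predicted class is an argmax over the two parallel lists; a minimal sketch (helper name and values are hypothetical):

```python
def top_class(classes, scores):
    """Return (class, score) with the highest score; the N-th score
    corresponds to the N-th class."""
    best = max(range(len(classes)), key=lambda i: scores[i])
    return classes[best], scores[best]

print(top_class(["yes", "no"], [0.3, 0.7]))  # ('no', 0.7)
```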
class google.cloud.aiplatform.v1.schema.predict.prediction_v1.types.TabularRegressionPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)
Bases: proto.message.Message
Prediction output format for Tabular Regression.
value()
The regression value.
Type
float
lower_bound()
The lower bound of the prediction interval.
Type
float
upper_bound()
The upper bound of the prediction interval.
Type
float
lower_bound: float
upper_bound: float
value: float
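The three floats are typically reported together as a point estimate plus its prediction interval; a trivial formatting sketch (helper name and values are hypothetical):

```python
def format_prediction(value, lower_bound, upper_bound):
    """Render a regression value with its prediction interval."""
    return f"{value:.2f} (interval {lower_bound:.2f}..{upper_bound:.2f})"

print(format_prediction(12.5, 10.0, 15.0))  # 12.50 (interval 10.00..15.00)
```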
class google.cloud.aiplatform.v1.schema.predict.prediction_v1.types.TextExtractionPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)
Bases: proto.message.Message
Prediction output format for Text Extraction.
ids()
The resource IDs of the AnnotationSpecs that had been identified, ordered by confidence score in descending order.
Type
MutableSequence[int]
display_names()
The display names of the AnnotationSpecs that had been identified; order matches the IDs.
Type
MutableSequence[str]
text_segment_start_offsets()
The start offsets, inclusive, of the text segment in which the AnnotationSpec has been identified. Expressed as a zero-based number of characters as measured from the start of the text snippet.
Type
MutableSequence[int]
text_segment_end_offsets()
The end offsets, inclusive, of the text segment in which the AnnotationSpec has been identified. Expressed as a zero-based number of characters as measured from the start of the text snippet.
Type
MutableSequence[int]
confidences()
The Model's confidences in correctness of the predicted IDs; a higher value means higher confidence. Order matches the IDs.
Type
MutableSequence[float]
confidences: MutableSequence[float]
display_names: MutableSequence[str]
ids: MutableSequence[int]
text_segment_end_offsets: MutableSequence[int]
text_segment_start_offsets: MutableSequence[int]
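Note that both offsets are inclusive, zero-based character counts, so recovering a segment from the snippet requires slicing up to end + 1; a sketch (the helper name, snippet, and offsets are hypothetical):

```python
def extract_segments(text, starts, ends, display_names):
    """Recover each identified segment from the snippet. Both offsets
    are inclusive, zero-based character counts, so the Python slice
    end is end + 1."""
    return [
        (name, text[start:end + 1])
        for name, start, end in zip(display_names, starts, ends)
    ]

# Hypothetical snippet and offsets.
snippet = "Alice met Bob in Paris."
print(extract_segments(snippet, [0, 17], [4, 21], ["person", "location"]))
# [('person', 'Alice'), ('location', 'Paris')]
```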
class google.cloud.aiplatform.v1.schema.predict.prediction_v1.types.TextSentimentPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)
Bases: proto.message.Message
Prediction output format for Text Sentiment.
sentiment()
The integer sentiment labels between 0 (inclusive) and sentimentMax (inclusive), where 0 maps to the least positive sentiment and sentimentMax maps to the most positive one. The higher the score, the more positive the sentiment in the text snippet. Note: sentimentMax is an integer value between 1 (inclusive) and 10 (inclusive).
Type
int
sentiment: int
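Since the label runs from 0 to sentimentMax (itself between 1 and 10), normalizing it to [0, 1] makes results comparable across models with different scales; a sketch under those constraints (helper name and values are hypothetical):

```python
def normalized_sentiment(sentiment, sentiment_max):
    """Scale the integer label (0..sentiment_max) to [0.0, 1.0], where
    0.0 is least positive and 1.0 most positive. sentiment_max must be
    an integer between 1 and 10 inclusive."""
    if not 1 <= sentiment_max <= 10:
        raise ValueError("sentimentMax must be between 1 and 10")
    if not 0 <= sentiment <= sentiment_max:
        raise ValueError("sentiment must be between 0 and sentimentMax")
    return sentiment / sentiment_max

print(normalized_sentiment(3, 4))  # 0.75
```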
class google.cloud.aiplatform.v1.schema.predict.prediction_v1.types.VideoActionRecognitionPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)
Bases: proto.message.Message
Prediction output format for Video Action Recognition.
id()
The resource ID of the AnnotationSpec that had been identified.
Type
str
display_name()
The display name of the AnnotationSpec that had been identified.
Type
str
time_segment_start()
The beginning, inclusive, of the video's time segment in which the AnnotationSpec has been identified. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with "s" appended at the end.
Type
google.protobuf.duration_pb2.Duration
time_segment_end()
The end, exclusive, of the video's time segment in which the AnnotationSpec has been identified. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with "s" appended at the end.
Type
google.protobuf.duration_pb2.Duration
confidence()
The Model's confidence in correctness of this prediction; a higher value means higher confidence.
Type
google.protobuf.wrappers_pb2.FloatValue
confidence: google.protobuf.wrappers_pb2.FloatValue
display_name: str
id: str
time_segment_end: google.protobuf.duration_pb2.Duration
time_segment_start: google.protobuf.duration_pb2.Duration
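In the JSON form described above, time offsets arrive as strings like "12.345678s" (seconds from the start of the video with a trailing "s"); a sketch of parsing that form back to a float (the helper name is hypothetical):

```python
def parse_time_offset(value):
    """Parse a time offset such as '12.345678s' (seconds measured from
    the start of the video, fractions up to microsecond precision,
    trailing 's') into a float number of seconds."""
    if not value.endswith("s"):
        raise ValueError("expected a trailing 's'")
    return float(value[:-1])

print(parse_time_offset("12.345678s"))  # 12.345678
```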
class google.cloud.aiplatform.v1.schema.predict.prediction_v1.types.VideoClassificationPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)
Bases: proto.message.Message
Prediction output format for Video Classification.
id()
The resource ID of the AnnotationSpec that had been identified.
Type
str
display_name()
The display name of the AnnotationSpec that had been identified.
Type
str
type_()
The type of the prediction. The requested types can be configured via parameters. This will be one of
- segment-classification
- shot-classification
- one-sec-interval-classification
Type
str
time_segment_start()
The beginning, inclusive, of the video's time segment in which the AnnotationSpec has been identified. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with "s" appended at the end. Note that for the 'segment-classification' prediction type, this equals the original 'timeSegmentStart' from the input instance; for other types it is the start of a shot or a 1-second interval, respectively.
Type
google.protobuf.duration_pb2.Duration
time_segment_end()
The end, exclusive, of the video's time segment in which the AnnotationSpec has been identified. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with "s" appended at the end. Note that for the 'segment-classification' prediction type, this equals the original 'timeSegmentEnd' from the input instance; for other types it is the end of a shot or a 1-second interval, respectively.
Type
google.protobuf.duration_pb2.Duration
confidence()
The Model's confidence in correctness of this prediction; a higher value means higher confidence.
Type
google.protobuf.wrappers_pb2.FloatValue
confidence: google.protobuf.wrappers_pb2.FloatValue
display_name: str
id: str
time_segment_end: google.protobuf.duration_pb2.Duration
time_segment_start: google.protobuf.duration_pb2.Duration
type_: str
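Since type_ takes one of exactly three documented values, downstream code often filters a batch of results by it; a sketch in plain Python (the helper name is hypothetical, and each prediction is modeled as a plain dict with a 'type' key mirroring the type_ field):

```python
def by_type(predictions, wanted):
    """Keep only predictions whose type matches one of the three
    documented prediction types."""
    allowed = {
        "segment-classification",
        "shot-classification",
        "one-sec-interval-classification",
    }
    if wanted not in allowed:
        raise ValueError(f"unknown prediction type: {wanted}")
    return [p for p in predictions if p["type"] == wanted]

# Hypothetical deserialized predictions.
preds = [
    {"type": "shot-classification", "display_name": "goal"},
    {"type": "segment-classification", "display_name": "sports"},
]
print(by_type(preds, "shot-classification"))
# [{'type': 'shot-classification', 'display_name': 'goal'}]
```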
class google.cloud.aiplatform.v1.schema.predict.prediction_v1.types.VideoObjectTrackingPredictionResult(mapping=None, *, ignore_unknown_fields=False, **kwargs)
Bases: proto.message.Message
Prediction output format for Video Object Tracking.
id()
The resource ID of the AnnotationSpec that had been identified.
Type
str
display_name()
The display name of the AnnotationSpec that had been identified.
Type
str
time_segment_start()
The beginning, inclusive, of the video's time segment in which the object instance has been detected. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with "s" appended at the end.
Type
google.protobuf.duration_pb2.Duration
time_segment_end()
The end, inclusive, of the video's time segment in which the object instance has been detected. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with "s" appended at the end.
Type
google.protobuf.duration_pb2.Duration
confidence()
The Model's confidence in correctness of this prediction; a higher value means higher confidence.
Type
google.protobuf.wrappers_pb2.FloatValue
frames()
All of the frames of the video in which a single object instance has been detected. The bounding boxes in the frames identify the same object.
Type
MutableSequence[google.cloud.aiplatform.v1.schema.predict.prediction_v1.types.VideoObjectTrackingPredictionResult.Frame]
class Frame(mapping=None, *, ignore_unknown_fields=False, **kwargs)
Bases: proto.message.Message
The fields xMin, xMax, yMin, and yMax refer to a bounding box, i.e. the rectangle over the video frame pinpointing the found AnnotationSpec. The coordinates are relative to the frame size, and the point 0,0 is in the top left of the frame.
time_offset()
A time (frame) of a video in which the object has been detected. Expressed as a number of seconds as measured from the start of the video, with fractions up to a microsecond precision, and with "s" appended at the end.
Type
google.protobuf.duration_pb2.Duration
x_min()
The leftmost coordinate of the bounding box.
Type
google.protobuf.wrappers_pb2.FloatValue
x_max()
The rightmost coordinate of the bounding box.
Type
google.protobuf.wrappers_pb2.FloatValue
y_min()
The topmost coordinate of the bounding box.
Type
google.protobuf.wrappers_pb2.FloatValue
y_max()
The bottommost coordinate of the bounding box.
Type
google.protobuf.wrappers_pb2.FloatValue
time_offset: google.protobuf.duration_pb2.Duration
x_max: google.protobuf.wrappers_pb2.FloatValue
x_min: google.protobuf.wrappers_pb2.FloatValue
y_max: google.protobuf.wrappers_pb2.FloatValue
y_min: google.protobuf.wrappers_pb2.FloatValue
confidence: google.protobuf.wrappers_pb2.FloatValue
display_name: str
frames: MutableSequence[Frame]
id: str
time_segment_end: google.protobuf.duration_pb2.Duration
time_segment_start: google.protobuf.duration_pb2.Duration
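Because every Frame of one result tracks the same object, a common post-processing step is computing the box center per frame to recover the object's trajectory; a plain-Python sketch (the helper name is hypothetical, and each frame is modeled as a dict mirroring the Frame fields in relative coordinates):

```python
def frame_centers(frames):
    """Compute the bounding-box center for each frame of one tracked
    object. Coordinates are relative to the frame size, origin 0,0 at
    the top left."""
    return [
        ((f["x_min"] + f["x_max"]) / 2, (f["y_min"] + f["y_max"]) / 2)
        for f in frames
    ]

# Hypothetical track: the object drifts right across two frames.
track = [
    {"x_min": 0.25, "x_max": 0.75, "y_min": 0.5, "y_max": 1.0},
    {"x_min": 0.5, "x_max": 1.0, "y_min": 0.0, "y_max": 0.5},
]
print(frame_centers(track))  # [(0.5, 0.75), (0.75, 0.25)]
```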
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-10-30 UTC.