Vertex AI GenAI API
Service: aiplatform.googleapis.com
To call this service, we recommend that you use the Google-provided client libraries. If your application needs to use your own libraries to call this service, use the following information when you make the API requests.
Discovery document
A Discovery Document is a machine-readable specification for describing and consuming REST APIs. It is used to build client libraries, IDE plugins, and other tools that interact with Google APIs. One service may provide multiple discovery documents. This service provides the following discovery documents:

- https://aiplatform.googleapis.com/$discovery/rest?version=v1
- https://aiplatform.googleapis.com/$discovery/rest?version=v1beta1
Service endpoint
A service endpoint is a base URL that specifies the network address of an API service. One service might have multiple service endpoints. This service has the following service endpoint, and all URIs below are relative to it:
https://aiplatform.googleapis.com
REST Resource: v1.media
| Method | HTTP request and description |
|---|---|
| upload | `POST /v1/{parent}/ragFiles:upload`<br>`POST /upload/v1/{parent}/ragFiles:upload`<br>Upload a file into a RagCorpus. |
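Both URI forms resolve against the service endpoint above. As a minimal sketch, the following builds the two upload URIs (the project, location, and corpus IDs are placeholders; no request is sent):

```python
# Build both URI forms for ragFiles:upload against the service endpoint.
# "my-project", "us-central1", and "1234" are placeholder IDs.
BASE = "https://aiplatform.googleapis.com"

def upload_uris(project: str, location: str, corpus: str) -> tuple[str, str]:
    parent = f"projects/{project}/locations/{location}/ragCorpora/{corpus}"
    simple = f"{BASE}/v1/{parent}/ragFiles:upload"        # metadata-style form
    media = f"{BASE}/upload/v1/{parent}/ragFiles:upload"  # media-upload form
    return simple, media

simple, media = upload_uris("my-project", "us-central1", "1234")
print(simple)
print(media)
```

The `/upload/...` prefix is the form typically used when the request body carries the file bytes themselves.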
REST Resource: v1.projects
| Method | HTTP request and description |
|---|---|
| getCacheConfig | `GET /v1/{name}`<br>Gets a GenAI cache config. |
| updateCacheConfig | `PATCH /v1/{cacheConfig.name}`<br>Updates a cache config. |
REST Resource: v1.projects.locations
| Method | HTTP request and description |
|---|---|
| augmentPrompt | `POST /v1/{parent}:augmentPrompt`<br>Given an input prompt, returns an augmented prompt from the Vertex RAG store to guide the LLM toward generating grounded responses. |
| corroborateContent | `POST /v1/{parent}:corroborateContent`<br>Given an input text, returns a score that evaluates its factuality. |
| evaluateDataset | `POST /v1/{location}:evaluateDataset`<br>Evaluates a dataset based on a set of given metrics. |
| evaluateInstances | `POST /v1/{location}:evaluateInstances`<br>Evaluates instances based on a given metric. |
| generateInstanceRubrics | `POST /v1/{location}:generateInstanceRubrics`<br>Generates rubrics for a given prompt. |
| generateSyntheticData | `POST /v1/{location}:generateSyntheticData`<br>Generates synthetic data based on the provided configuration. |
| getRagEngineConfig | `GET /v1/{name}`<br>Gets a RagEngineConfig. |
| retrieveContexts | `POST /v1/{parent}:retrieveContexts`<br>Retrieves relevant contexts for a query. |
| updateRagEngineConfig | `PATCH /v1/{ragEngineConfig.name}`<br>Updates a RagEngineConfig. |
REST Resource: v1.projects.locations.cachedContents
| Method | HTTP request and description |
|---|---|
| create | `POST /v1/{parent}/cachedContents`<br>Creates cached content. This call initializes the cached content in data storage, and you pay for the cached data storage. |
| delete | `DELETE /v1/{name}`<br>Deletes cached content. |
| get | `GET /v1/{name}`<br>Gets cached content configurations. |
| list | `GET /v1/{parent}/cachedContents`<br>Lists cached contents in a project. |
| patch | `PATCH /v1/{cachedContent.name}`<br>Updates cached content configurations. |
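As a minimal sketch of a `create` request body, the following builds a CachedContent payload. The field names (`model`, `contents`, `ttl`) follow the public API surface, but treat the exact values as placeholders rather than a tested configuration:

```python
import json

def cached_content_body(model: str, text: str, ttl_seconds: int) -> str:
    # Minimal CachedContent payload; cache storage is billed while the TTL runs.
    body = {
        "model": model,  # full publisher model resource name (placeholder below)
        "contents": [{"role": "user", "parts": [{"text": text}]}],
        "ttl": f"{ttl_seconds}s",
    }
    return json.dumps(body)

print(cached_content_body(
    "projects/my-project/locations/us-central1/publishers/google/models/gemini-2.0-flash-001",
    "A long shared context to cache.",
    3600))
```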
REST Resource: v1.projects.locations.endpoints
| Method | HTTP request and description |
|---|---|
| computeTokens | `POST /v1/{endpoint}:computeTokens`<br>Returns a list of tokens based on the input text. |
| countTokens | `POST /v1/{endpoint}:countTokens`<br>Performs token counting. |
| fetchPredictOperation | `POST /v1/{endpoint}:fetchPredictOperation`<br>Fetches an asynchronous online prediction operation. |
| generateContent | `POST /v1/{model}:generateContent`<br>Generates content from multimodal inputs. |
| predict | `POST /v1/{endpoint}:predict`<br>Runs inference on Google's generative AI models on Vertex AI. |
| predictLongRunning | `POST /v1/{endpoint}:predictLongRunning` |
| rawPredict | `POST /v1/{endpoint}:rawPredict`<br>Performs an online prediction with an arbitrary HTTP payload. |
| serverStreamingPredict | `POST /v1/{endpoint}:serverStreamingPredict`<br>Performs a server-side streaming online prediction request for Vertex LLM streaming. |
| streamGenerateContent | `POST /v1/{model}:streamGenerateContent`<br>Generates content from multimodal inputs with streaming support. |
| streamRawPredict | `POST /v1/{endpoint}:streamRawPredict`<br>Performs a streaming online prediction with an arbitrary HTTP payload. |
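For `generateContent`, the `{model}` path segment can be a full publisher model resource name. A minimal sketch of the URL and request body, with placeholder project and model IDs (no network call is made):

```python
import json

BASE = "https://aiplatform.googleapis.com"

def generate_content_request(project, location, model, prompt):
    # {model} here is a publisher model resource name; all IDs are placeholders.
    name = (f"projects/{project}/locations/{location}"
            f"/publishers/google/models/{model}")
    url = f"{BASE}/v1/{name}:generateContent"
    body = {"contents": [{"role": "user", "parts": [{"text": prompt}]}]}
    return url, json.dumps(body)

url, body = generate_content_request(
    "my-project", "us-central1", "gemini-2.0-flash-001", "Hello")
print(url)
```

Sending the request additionally requires an OAuth 2.0 access token in the `Authorization` header, which is out of scope for this sketch.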
REST Resource: v1.projects.locations.endpoints.chat
| Method | HTTP request and description |
|---|---|
| completions | `POST /v1/{endpoint}/chat/completions`<br>Exposes an OpenAI-compatible endpoint for chat completions. |
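A minimal sketch of an OpenAI-style payload for the compatibility endpoint. The `endpoints/openapi` segment and the model string are assumptions here, so check them against the reference before use:

```python
import json

def chat_completions_request(project, location, model, user_msg):
    # Assumed endpoint resource name for the OpenAI-compatible surface.
    endpoint = f"projects/{project}/locations/{location}/endpoints/openapi"
    url = f"https://aiplatform.googleapis.com/v1/{endpoint}/chat/completions"
    body = {
        "model": model,  # placeholder model identifier
        "messages": [{"role": "user", "content": user_msg}],
    }
    return url, json.dumps(body)

url, body = chat_completions_request(
    "my-project", "us-central1", "google/gemini-2.0-flash-001", "Hello")
print(url)
```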
REST Resource: v1.projects.locations.endpoints.deployedModels.invoke
| Method | HTTP request and description |
|---|---|
| invoke | `POST /v1/{endpoint}/deployedModels/{deployedModelId}/invoke/**`<br>Forwards arbitrary HTTP requests for both streaming and non-streaming cases. |
REST Resource: v1.projects.locations.endpoints.google.science
| Method | HTTP request and description |
|---|---|
| inference | `POST /v1/{endpoint}/science/inference`<br>Forwards arbitrary HTTP requests for both streaming and non-streaming cases. |
REST Resource: v1.projects.locations.endpoints.invoke
| Method | HTTP request and description |
|---|---|
| invoke | `POST /v1/{endpoint}/invoke/**`<br>Forwards arbitrary HTTP requests for both streaming and non-streaming cases. |
REST Resource: v1.projects.locations.endpoints.openapi
| Method | HTTP request and description |
|---|---|
| embeddings | `POST /v1/{endpoint}/embeddings`<br>Forwards arbitrary HTTP requests for both streaming and non-streaming cases. |
REST Resource: v1.projects.locations.evaluationItems
| Method | HTTP request and description |
|---|---|
| create | `POST /v1/{parent}/evaluationItems`<br>Creates an Evaluation Item. |
| delete | `DELETE /v1/{name}`<br>Deletes an Evaluation Item. |
| get | `GET /v1/{name}`<br>Gets an Evaluation Item. |
| list | `GET /v1/{parent}/evaluationItems`<br>Lists Evaluation Items. |
REST Resource: v1.projects.locations.evaluationRuns
| Method | HTTP request and description |
|---|---|
| cancel | `POST /v1/{name}:cancel`<br>Cancels an Evaluation Run. |
| create | `POST /v1/{parent}/evaluationRuns`<br>Creates an Evaluation Run. |
| delete | `DELETE /v1/{name}`<br>Deletes an Evaluation Run. |
| get | `GET /v1/{name}`<br>Gets an Evaluation Run. |
| list | `GET /v1/{parent}/evaluationRuns`<br>Lists Evaluation Runs. |
REST Resource: v1.projects.locations.evaluationSets
| Method | HTTP request and description |
|---|---|
| create | `POST /v1/{parent}/evaluationSets`<br>Creates an Evaluation Set. |
| delete | `DELETE /v1/{name}`<br>Deletes an Evaluation Set. |
| get | `GET /v1/{name}`<br>Gets an Evaluation Set. |
| list | `GET /v1/{parent}/evaluationSets`<br>Lists Evaluation Sets. |
| patch | `PATCH /v1/{evaluationSet.name}`<br>Updates an Evaluation Set. |
REST Resource: v1.projects.locations.models
| Method | HTTP request and description |
|---|---|
| getIamPolicy | `POST /v1/{resource}:getIamPolicy`<br>Gets the access control policy for a resource. |
| setIamPolicy | `POST /v1/{resource}:setIamPolicy`<br>Sets the access control policy on the specified resource. |
| testIamPermissions | `POST /v1/{resource}:testIamPermissions`<br>Returns permissions that a caller has on the specified resource. |
REST Resource: v1.projects.locations.operations
| Method | HTTP request and description |
|---|---|
| cancel | `POST /v1/{name}:cancel`<br>Starts asynchronous cancellation on a long-running operation. |
| delete | `DELETE /v1/{name}`<br>Deletes a long-running operation. |
| get | `GET /v1/{name}`<br>Gets the latest state of a long-running operation. |
| list | `GET /v1/{name}/operations`<br>Lists operations that match the specified filter in the request. |
| wait | `POST /v1/{name}:wait`<br>Waits until the specified long-running operation is done or a specified timeout is reached, returning the latest state. |
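A common client pattern is to poll `get` until the operation reports `done`. A minimal sketch, assuming a caller-supplied `get_operation(name)` helper (hypothetical, not part of the API) that performs the authenticated GET and returns the parsed Operation JSON:

```python
import time

def wait_for_operation(get_operation, name, interval=2.0, timeout=300.0):
    # Poll the operation until done, or raise after `timeout` seconds.
    # `get_operation` is a hypothetical callable supplied by the caller.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        op = get_operation(name)
        if op.get("done"):
            return op  # contains either `response` or `error`
        time.sleep(interval)
    raise TimeoutError(f"operation {name} still running after {timeout}s")
```

The server-side `wait` method offers similar semantics in a single call; client-side polling is useful when you want to bound each request individually.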
REST Resource: v1.projects.locations.publishers.models
| Method | HTTP request and description |
|---|---|
| computeTokens | `POST /v1/{endpoint}:computeTokens`<br>Returns a list of tokens based on the input text. |
| countTokens | `POST /v1/{endpoint}:countTokens`<br>Performs token counting. |
| embedContent | `POST /v1/{model}:embedContent`<br>Embeds content with multimodal inputs. |
| fetchPredictOperation | `POST /v1/{endpoint}:fetchPredictOperation`<br>Fetches an asynchronous online prediction operation. |
| generateContent | `POST /v1/{model}:generateContent`<br>Generates content from multimodal inputs. |
| predict | `POST /v1/{endpoint}:predict`<br>Runs inference on Google's generative AI models on Vertex AI. |
| predictLongRunning | `POST /v1/{endpoint}:predictLongRunning` |
| rawPredict | `POST /v1/{endpoint}:rawPredict`<br>Performs an online prediction with an arbitrary HTTP payload. |
| serverStreamingPredict | `POST /v1/{endpoint}:serverStreamingPredict`<br>Performs a server-side streaming online prediction request for Vertex LLM streaming. |
| streamGenerateContent | `POST /v1/{model}:streamGenerateContent`<br>Generates content from multimodal inputs with streaming support. |
| streamRawPredict | `POST /v1/{endpoint}:streamRawPredict`<br>Performs a streaming online prediction with an arbitrary HTTP payload. |
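For `embedContent`, a minimal sketch of the URL and body. The model ID is a placeholder and the body shape (a single `content` with `parts`) mirrors the multimodal Content structure, so verify it against the method reference before use:

```python
import json

def embed_content_request(project, location, model, text):
    # All IDs below are placeholders; no request is sent.
    name = (f"projects/{project}/locations/{location}"
            f"/publishers/google/models/{model}")
    url = f"https://aiplatform.googleapis.com/v1/{name}:embedContent"
    body = {"content": {"parts": [{"text": text}]}}
    return url, json.dumps(body)

url, body = embed_content_request(
    "my-project", "us-central1", "gemini-embedding-001", "text to embed")
print(url)
```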
REST Resource: v1.projects.locations.ragCorpora
| Method | HTTP request and description |
|---|---|
| create | `POST /v1/{parent}/ragCorpora`<br>Creates a RagCorpus. |
| delete | `DELETE /v1/{name}`<br>Deletes a RagCorpus. |
| get | `GET /v1/{name}`<br>Gets a RagCorpus. |
| list | `GET /v1/{parent}/ragCorpora`<br>Lists RagCorpora in a Location. |
| patch | `PATCH /v1/{ragCorpus.name}`<br>Updates a RagCorpus. |
REST Resource: v1.projects.locations.ragCorpora.ragFiles
| Method | HTTP request and description |
|---|---|
| delete | `DELETE /v1/{name}`<br>Deletes a RagFile. |
| get | `GET /v1/{name}`<br>Gets a RagFile. |
| import | `POST /v1/{parent}/ragFiles:import`<br>Imports files from Google Cloud Storage or Google Drive into a RagCorpus. |
| list | `GET /v1/{parent}/ragFiles`<br>Lists RagFiles in a RagCorpus. |
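A minimal sketch of an `import` request body that pulls files from Cloud Storage. The bucket URI is a placeholder, and the field names (`importRagFilesConfig`, `gcsSource.uris`) follow the ImportRagFilesConfig surface but should be checked against the method reference:

```python
import json

def import_rag_files_body(gcs_uris):
    # gcs_uris: iterable of gs:// URIs to import (placeholders in the demo).
    return json.dumps({
        "importRagFilesConfig": {"gcsSource": {"uris": list(gcs_uris)}}
    })

print(import_rag_files_body(["gs://my-bucket/docs/handbook.pdf"]))
```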
REST Resource: v1.projects.locations.reasoningEngines
| Method | HTTP request and description |
|---|---|
| create | `POST /v1/{parent}/reasoningEngines`<br>Creates a reasoning engine. |
| delete | `DELETE /v1/{name}`<br>Deletes a reasoning engine. |
| get | `GET /v1/{name}`<br>Gets a reasoning engine. |
| list | `GET /v1/{parent}/reasoningEngines`<br>Lists reasoning engines in a location. |
| patch | `PATCH /v1/{reasoningEngine.name}`<br>Updates a reasoning engine. |
| query | `POST /v1/{name}:query`<br>Queries using a reasoning engine. |
| streamQuery | `POST /v1/{name}:streamQuery`<br>Streams queries using a reasoning engine. |
REST Resource: v1.projects.locations.tuningJobs
| Method | HTTP request and description |
|---|---|
| cancel | `POST /v1/{name}:cancel`<br>Cancels a TuningJob. |
| create | `POST /v1/{parent}/tuningJobs`<br>Creates a TuningJob. |
| get | `GET /v1/{name}`<br>Gets a TuningJob. |
| list | `GET /v1/{parent}/tuningJobs`<br>Lists TuningJobs in a Location. |
| rebaseTunedModel | `POST /v1/{parent}/tuningJobs:rebaseTunedModel`<br>Rebases a TunedModel. |
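A minimal sketch of a `create` request body for a supervised tuning job. The base model and dataset URI are placeholders; the field names (`baseModel`, `supervisedTuningSpec.trainingDatasetUri`) follow the TuningJob surface but should be verified against the reference:

```python
import json

def tuning_job_body(base_model, dataset_uri):
    # base_model and dataset_uri are placeholders supplied by the caller.
    return json.dumps({
        "baseModel": base_model,
        "supervisedTuningSpec": {"trainingDatasetUri": dataset_uri},
    })

print(tuning_job_body("gemini-2.0-flash-001", "gs://my-bucket/train.jsonl"))
```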
REST Resource: v1beta1.media
| Method | HTTP request and description |
|---|---|
| upload | `POST /v1beta1/{parent}/ragFiles:upload`<br>`POST /upload/v1beta1/{parent}/ragFiles:upload`<br>Upload a file into a RagCorpus. |
REST Resource: v1beta1.projects
| Method | HTTP request and description |
|---|---|
| getCacheConfig | `GET /v1beta1/{name}`<br>Gets a GenAI cache config. |
| updateCacheConfig | `PATCH /v1beta1/{cacheConfig.name}`<br>Updates a cache config. |
REST Resource: v1beta1.projects.locations
| Method | HTTP request and description |
|---|---|
| augmentPrompt | `POST /v1beta1/{parent}:augmentPrompt`<br>Given an input prompt, returns an augmented prompt from the Vertex RAG store to guide the LLM toward generating grounded responses. |
| corroborateContent | `POST /v1beta1/{parent}:corroborateContent`<br>Given an input text, returns a score that evaluates its factuality. |
| evaluateDataset | `POST /v1beta1/{location}:evaluateDataset`<br>Evaluates a dataset based on a set of given metrics. |
| evaluateInstances | `POST /v1beta1/{location}:evaluateInstances`<br>Evaluates instances based on a given metric. |
| generateInstanceRubrics | `POST /v1beta1/{location}:generateInstanceRubrics`<br>Generates rubrics for a given prompt. |
| generateSyntheticData | `POST /v1beta1/{location}:generateSyntheticData`<br>Generates synthetic data based on the provided configuration. |
| getRagEngineConfig | `GET /v1beta1/{name}`<br>Gets a RagEngineConfig. |
| retrieveContexts | `POST /v1beta1/{parent}:retrieveContexts`<br>Retrieves relevant contexts for a query. |
| updateRagEngineConfig | `PATCH /v1beta1/{ragEngineConfig.name}`<br>Updates a RagEngineConfig. |
REST Resource: v1beta1.projects.locations.cachedContents
| Method | HTTP request and description |
|---|---|
| create | `POST /v1beta1/{parent}/cachedContents`<br>Creates cached content. This call initializes the cached content in data storage, and you pay for the cached data storage. |
| delete | `DELETE /v1beta1/{name}`<br>Deletes cached content. |
| get | `GET /v1beta1/{name}`<br>Gets cached content configurations. |
| list | `GET /v1beta1/{parent}/cachedContents`<br>Lists cached contents in a project. |
| patch | `PATCH /v1beta1/{cachedContent.name}`<br>Updates cached content configurations. |
REST Resource: v1beta1.projects.locations.endpoints
| Method | HTTP request and description |
|---|---|
| computeTokens | `POST /v1beta1/{endpoint}:computeTokens`<br>Returns a list of tokens based on the input text. |
| countTokens | `POST /v1beta1/{endpoint}:countTokens`<br>Performs token counting. |
| fetchPredictOperation | `POST /v1beta1/{endpoint}:fetchPredictOperation`<br>Fetches an asynchronous online prediction operation. |
| generateContent | `POST /v1beta1/{model}:generateContent`<br>Generates content from multimodal inputs. |
| getIamPolicy | `POST /v1beta1/{resource}:getIamPolicy`<br>Gets the access control policy for a resource. |
| predict | `POST /v1beta1/{endpoint}:predict`<br>Runs inference on Google's generative AI models on Vertex AI. |
| predictLongRunning | `POST /v1beta1/{endpoint}:predictLongRunning` |
| rawPredict | `POST /v1beta1/{endpoint}:rawPredict`<br>Performs an online prediction with an arbitrary HTTP payload. |
| serverStreamingPredict | `POST /v1beta1/{endpoint}:serverStreamingPredict`<br>Performs a server-side streaming online prediction request for Vertex LLM streaming. |
| setIamPolicy | `POST /v1beta1/{resource}:setIamPolicy`<br>Sets the access control policy on the specified resource. |
| streamGenerateContent | `POST /v1beta1/{model}:streamGenerateContent`<br>Generates content from multimodal inputs with streaming support. |
| streamRawPredict | `POST /v1beta1/{endpoint}:streamRawPredict`<br>Performs a streaming online prediction with an arbitrary HTTP payload. |
| testIamPermissions | `POST /v1beta1/{resource}:testIamPermissions`<br>Returns permissions that a caller has on the specified resource. |
REST Resource: v1beta1.projects.locations.endpoints.chat
| Method | HTTP request and description |
|---|---|
| completions | `POST /v1beta1/{endpoint}/chat/completions`<br>Exposes an OpenAI-compatible endpoint for chat completions. |
REST Resource: v1beta1.projects.locations.endpoints.deployedModels.invoke
| Method | HTTP request and description |
|---|---|
| invoke | `POST /v1beta1/{endpoint}/deployedModels/{deployedModelId}/invoke/**`<br>Forwards arbitrary HTTP requests for both streaming and non-streaming cases. |
REST Resource: v1beta1.projects.locations.endpoints.google.science
| Method | HTTP request and description |
|---|---|
| inference | `POST /v1beta1/{endpoint}/science/inference`<br>Forwards arbitrary HTTP requests for both streaming and non-streaming cases. |
REST Resource: v1beta1.projects.locations.endpoints.invoke
| Method | HTTP request and description |
|---|---|
| invoke | `POST /v1beta1/{endpoint}/invoke/**`<br>Forwards arbitrary HTTP requests for both streaming and non-streaming cases. |
REST Resource: v1beta1.projects.locations.endpoints.openapi
| Method | HTTP request and description |
|---|---|
| embeddings | `POST /v1beta1/{endpoint}/embeddings`<br>Forwards arbitrary HTTP requests for both streaming and non-streaming cases. |
REST Resource: v1beta1.projects.locations.evaluationItems
| Method | HTTP request and description |
|---|---|
| create | `POST /v1beta1/{parent}/evaluationItems`<br>Creates an Evaluation Item. |
| delete | `DELETE /v1beta1/{name}`<br>Deletes an Evaluation Item. |
| get | `GET /v1beta1/{name}`<br>Gets an Evaluation Item. |
| list | `GET /v1beta1/{parent}/evaluationItems`<br>Lists Evaluation Items. |
REST Resource: v1beta1.projects.locations.evaluationRuns
| Method | HTTP request and description |
|---|---|
| cancel | `POST /v1beta1/{name}:cancel`<br>Cancels an Evaluation Run. |
| create | `POST /v1beta1/{parent}/evaluationRuns`<br>Creates an Evaluation Run. |
| delete | `DELETE /v1beta1/{name}`<br>Deletes an Evaluation Run. |
| get | `GET /v1beta1/{name}`<br>Gets an Evaluation Run. |
| list | `GET /v1beta1/{parent}/evaluationRuns`<br>Lists Evaluation Runs. |
REST Resource: v1beta1.projects.locations.evaluationSets
| Method | HTTP request and description |
|---|---|
| create | `POST /v1beta1/{parent}/evaluationSets`<br>Creates an Evaluation Set. |
| delete | `DELETE /v1beta1/{name}`<br>Deletes an Evaluation Set. |
| get | `GET /v1beta1/{name}`<br>Gets an Evaluation Set. |
| list | `GET /v1beta1/{parent}/evaluationSets`<br>Lists Evaluation Sets. |
| patch | `PATCH /v1beta1/{evaluationSet.name}`<br>Updates an Evaluation Set. |
REST Resource: v1beta1.projects.locations.extensions
| Method | HTTP request and description |
|---|---|
| delete | `DELETE /v1beta1/{name}`<br>Deletes an Extension. |
| execute | `POST /v1beta1/{name}:execute`<br>Executes the request against a given extension. |
| get | `GET /v1beta1/{name}`<br>Gets an Extension. |
| import | `POST /v1beta1/{parent}/extensions:import`<br>Imports an Extension. |
| list | `GET /v1beta1/{parent}/extensions`<br>Lists Extensions in a location. |
| patch | `PATCH /v1beta1/{extension.name}`<br>Updates an Extension. |
| query | `POST /v1beta1/{name}:query`<br>Queries an extension with a default controller. |
REST Resource: v1beta1.projects.locations.models
| Method | HTTP request and description |
|---|---|
| getIamPolicy | `POST /v1beta1/{resource}:getIamPolicy`<br>Gets the access control policy for a resource. |
| setIamPolicy | `POST /v1beta1/{resource}:setIamPolicy`<br>Sets the access control policy on the specified resource. |
| testIamPermissions | `POST /v1beta1/{resource}:testIamPermissions`<br>Returns permissions that a caller has on the specified resource. |
REST Resource: v1beta1.projects.locations.operations
| Method | HTTP request and description |
|---|---|
| cancel | `POST /v1beta1/{name}:cancel`<br>Starts asynchronous cancellation on a long-running operation. |
| delete | `DELETE /v1beta1/{name}`<br>Deletes a long-running operation. |
| get | `GET /v1beta1/{name}`<br>Gets the latest state of a long-running operation. |
| list | `GET /v1beta1/{name}/operations`<br>Lists operations that match the specified filter in the request. |
| wait | `POST /v1beta1/{name}:wait`<br>Waits until the specified long-running operation is done or a specified timeout is reached, returning the latest state. |
REST Resource: v1beta1.projects.locations.publishers
| Method | HTTP request and description |
|---|---|
| getIamPolicy | `POST /v1beta1/{resource}:getIamPolicy`<br>Gets the access control policy for a resource. |
REST Resource: v1beta1.projects.locations.publishers.models
| Method | HTTP request and description |
|---|---|
| computeTokens | `POST /v1beta1/{endpoint}:computeTokens`<br>Returns a list of tokens based on the input text. |
| countTokens | `POST /v1beta1/{endpoint}:countTokens`<br>Performs token counting. |
| embedContent | `POST /v1beta1/{model}:embedContent`<br>Embeds content with multimodal inputs. |
| fetchPredictOperation | `POST /v1beta1/{endpoint}:fetchPredictOperation`<br>Fetches an asynchronous online prediction operation. |
| generateContent | `POST /v1beta1/{model}:generateContent`<br>Generates content from multimodal inputs. |
| getIamPolicy | `POST /v1beta1/{resource}:getIamPolicy`<br>Gets the access control policy for a resource. |
| predict | `POST /v1beta1/{endpoint}:predict`<br>Runs inference on Google's generative AI models on Vertex AI. |
| predictLongRunning | `POST /v1beta1/{endpoint}:predictLongRunning` |
| rawPredict | `POST /v1beta1/{endpoint}:rawPredict`<br>Performs an online prediction with an arbitrary HTTP payload. |
| serverStreamingPredict | `POST /v1beta1/{endpoint}:serverStreamingPredict`<br>Performs a server-side streaming online prediction request for Vertex LLM streaming. |
| streamGenerateContent | `POST /v1beta1/{model}:streamGenerateContent`<br>Generates content from multimodal inputs with streaming support. |
| streamRawPredict | `POST /v1beta1/{endpoint}:streamRawPredict`<br>Performs a streaming online prediction with an arbitrary HTTP payload. |
REST Resource: v1beta1.projects.locations.ragCorpora
| Method | HTTP request and description |
|---|---|
| create | `POST /v1beta1/{parent}/ragCorpora`<br>Creates a RagCorpus. |
| delete | `DELETE /v1beta1/{name}`<br>Deletes a RagCorpus. |
| get | `GET /v1beta1/{name}`<br>Gets a RagCorpus. |
| list | `GET /v1beta1/{parent}/ragCorpora`<br>Lists RagCorpora in a Location. |
| patch | `PATCH /v1beta1/{ragCorpus.name}`<br>Updates a RagCorpus. |
REST Resource: v1beta1.projects.locations.ragCorpora.ragFiles
| Method | HTTP request and description |
|---|---|
| delete | `DELETE /v1beta1/{name}`<br>Deletes a RagFile. |
| get | `GET /v1beta1/{name}`<br>Gets a RagFile. |
| import | `POST /v1beta1/{parent}/ragFiles:import`<br>Imports files from Google Cloud Storage or Google Drive into a RagCorpus. |
| list | `GET /v1beta1/{parent}/ragFiles`<br>Lists RagFiles in a RagCorpus. |
REST Resource: v1beta1.projects.locations.reasoningEngines
| Method | HTTP request and description |
|---|---|
| create | `POST /v1beta1/{parent}/reasoningEngines`<br>Creates a reasoning engine. |
| delete | `DELETE /v1beta1/{name}`<br>Deletes a reasoning engine. |
| get | `GET /v1beta1/{name}`<br>Gets a reasoning engine. |
| list | `GET /v1beta1/{parent}/reasoningEngines`<br>Lists reasoning engines in a location. |
| patch | `PATCH /v1beta1/{reasoningEngine.name}`<br>Updates a reasoning engine. |
| query | `POST /v1beta1/{name}:query`<br>Queries using a reasoning engine. |
| streamQuery | `POST /v1beta1/{name}:streamQuery`<br>Streams queries using a reasoning engine. |
REST Resource: v1beta1.projects.locations.reasoningEngines.a2a.v1
| Method | HTTP request and description |
|---|---|
| card | `GET /v1beta1/{name}/a2a/{a2aEndpoint}`<br>Gets the agent card from a reasoning engine instance via the A2A GET protocol APIs. |
REST Resource: v1beta1.projects.locations.reasoningEngines.a2a.v1.message
| Method | HTTP request and description |
|---|---|
| send | `POST /v1beta1/{name}/a2a/{a2aEndpoint}:send`<br>Sends a message to a reasoning engine instance via the A2A POST protocol APIs. |
| stream | `POST /v1beta1/{name}/a2a/{a2aEndpoint}:stream`<br>Streams queries to a reasoning engine instance via the A2A streaming protocol APIs. |
REST Resource: v1beta1.projects.locations.reasoningEngines.a2a.v1.tasks
| Method | HTTP request and description |
|---|---|
| a2aGetReasoningEngine | `GET /v1beta1/{name}/a2a/{a2aEndpoint}`<br>Issues a GET request to a reasoning engine instance via the A2A GET protocol APIs. |
| cancel | `POST /v1beta1/{name}/a2a/{a2aEndpoint}:cancel`<br>Cancels a task on a reasoning engine instance via the A2A POST protocol APIs. |
| pushNotificationConfigs | `GET /v1beta1/{name}/a2a/{a2aEndpoint}`<br>Gets push notification configs from a reasoning engine instance via the A2A GET protocol APIs. |
| subscribe | `GET /v1beta1/{name}/a2a/{a2aEndpoint}:subscribe`<br>Subscribes to task updates from a reasoning engine instance via the A2A streaming GET protocol APIs. |
REST Resource: v1beta1.projects.locations.reasoningEngines.a2a.v1.tasks.pushNotificationConfigs
| Method | HTTP request and description |
|---|---|
| a2aGetReasoningEngine | `GET /v1beta1/{name}/a2a/{a2aEndpoint}`<br>Issues a GET request to a reasoning engine instance via the A2A GET protocol APIs. |
REST Resource: v1beta1.projects.locations.reasoningEngines.memories
| Method | HTTP request and description |
|---|---|
| create | `POST /v1beta1/{parent}/memories`<br>Creates a Memory. |
| delete | `DELETE /v1beta1/{name}`<br>Deletes a Memory. |
| generate | `POST /v1beta1/{parent}/memories:generate`<br>Generates memories. |
| get | `GET /v1beta1/{name}`<br>Gets a Memory. |
| list | `GET /v1beta1/{parent}/memories`<br>Lists Memories. |
| patch | `PATCH /v1beta1/{memory.name}`<br>Updates a Memory. |
| purge | `POST /v1beta1/{parent}/memories:purge`<br>Purges memories. |
| retrieve | `POST /v1beta1/{parent}/memories:retrieve`<br>Retrieves memories. |
| rollback | `POST /v1beta1/{name}:rollback`<br>Rolls back a Memory to a specific revision. |
REST Resource: v1beta1.projects.locations.reasoningEngines.memories.revisions
| Method | HTTP request and description |
|---|---|
| get | `GET /v1beta1/{name}`<br>Gets a Memory Revision. |
| list | `GET /v1beta1/{parent}/revisions`<br>Lists Memory Revisions for a Memory. |
REST Resource: v1beta1.projects.locations.reasoningEngines.sessions
| Method | HTTP request and description |
|---|---|
| appendEvent | `POST /v1beta1/{name}:appendEvent`<br>Appends an event to a given session. |
| create | `POST /v1beta1/{parent}/sessions`<br>Creates a new Session. |
| delete | `DELETE /v1beta1/{name}`<br>Deletes the specified Session. |
| get | `GET /v1beta1/{name}`<br>Gets details of the specified Session. |
| list | `GET /v1beta1/{parent}/sessions`<br>Lists Sessions in a given reasoning engine. |
| patch | `PATCH /v1beta1/{session.name}`<br>Updates the specified Session. |
REST Resource: v1beta1.projects.locations.reasoningEngines.sessions.events
| Method | HTTP request and description |
|---|---|
| list | `GET /v1beta1/{parent}/events`<br>Lists Events in a given session. |
REST Resource: v1beta1.projects.locations.tuningJobs
| Method | HTTP request and description |
|---|---|
| cancel | `POST /v1beta1/{name}:cancel`<br>Cancels a TuningJob. |
| create | `POST /v1beta1/{parent}/tuningJobs`<br>Creates a TuningJob. |
| get | `GET /v1beta1/{name}`<br>Gets a TuningJob. |
| list | `GET /v1beta1/{parent}/tuningJobs`<br>Lists TuningJobs in a Location. |
| optimizePrompt | `POST /v1beta1/{parent}/tuningJobs:optimizePrompt`<br>Optimizes a prompt. |
| rebaseTunedModel | `POST /v1beta1/{parent}/tuningJobs:rebaseTunedModel`<br>Rebases a TunedModel. |
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-11-18 UTC.