Manually log data to an experiment run
For logging purposes, use the Vertex AI SDK for Python.
The supported metrics and parameters are described in the following sections.
Note: When the optional resume parameter is specified as TRUE, the previously started run resumes. When not specified, resume defaults to FALSE and a new run is created.
The following samples use the init method from the aiplatform functions.
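For illustration, here is a minimal sketch of how init, start_run, and the resume flag fit together. The experiment name, run name, project ID, and region are placeholders, not values from this guide.

from google.cloud import aiplatform

# Placeholder values -- replace with your own experiment, project, and region.
aiplatform.init(
    experiment="my-experiment",
    project="my-project",
    location="us-central1",
)

# resume is not specified, so it defaults to False and a new run is created.
aiplatform.start_run(run="my-run")
aiplatform.log_params({"learning_rate": 0.01})
aiplatform.end_run()

# Specifying resume=True resumes the previously started run instead of creating a new one.
aiplatform.start_run(run="my-run", resume=True)
aiplatform.log_metrics({"accuracy": 0.92})
aiplatform.end_run()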
Summary metrics
Summary metrics are single value scalar metrics stored next to time series metrics and represent a final summary of an experiment run.
One example use case is early stopping: a patience configuration allows training to continue, but the candidate model is restored from an earlier step. The metrics calculated for the model at that step are represented as summary metrics, because the latest time series metrics are not representative of the restored model. Use the log_metrics API for this purpose.
Python
from typing import Dict

from google.cloud import aiplatform


def log_metrics_sample(
    experiment_name: str,
    run_name: str,
    metrics: Dict[str, float],
    project: str,
    location: str,
):
    aiplatform.init(experiment=experiment_name, project=project, location=location)

    aiplatform.start_run(run=run_name)

    aiplatform.log_metrics(metrics)

- experiment_name: Provide a name for your experiment. You can find your list of experiments in the Google Cloud console by selecting Experiments in the section nav.
- run_name: Specify a run name (see start_run).
- metrics: Metric key-value pairs. For example: {'learning_rate': 0.1}
- project: Your project ID. You can find project IDs on the Google Cloud console welcome page.
- location: See List of available locations.
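As a usage sketch, a hypothetical call to the sample function above might look like the following; the project ID, region, and metric values are placeholders.

log_metrics_sample(
    experiment_name="my-experiment",
    run_name="my-run",
    metrics={"accuracy": 0.92, "final_loss": 0.08},
    project="my-project",      # placeholder project ID
    location="us-central1",    # placeholder region
)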
Time series metrics
To log time series metrics, Vertex AI Experiments requires a backing Vertex AI TensorBoard instance.
Assign a backing Vertex AI TensorBoard resource for time series metric logging.
All metrics logged through log_time_series_metrics are stored as time series metrics. Vertex AI TensorBoard is the backing time series metric store.
The experiment_tensorboard can be set at both the experiment and experiment run levels. Setting the experiment_tensorboard at the run level overrides the setting at the experiment level. Once the experiment_tensorboard is set in a run, the run's experiment_tensorboard can't be changed.
- Set experiment_tensorboard at the experiment level:

  aiplatform.init(
      experiment='my-experiment',
      experiment_tensorboard='projects/.../tensorboard/my-tb-resource',
  )

- Set experiment_tensorboard at the run level. Note: This overrides the setting at the experiment level.

  aiplatform.start_run(
      run='my-other-run',
      tensorboard='projects/.../.../other-resource',
  )
  aiplatform.log_time_series_metrics(...)
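If you don't yet have a backing Vertex AI TensorBoard instance, one way to create one with the SDK and attach it to the experiment is sketched below; the display name, project ID, and region are placeholders.

from google.cloud import aiplatform

# Create a Vertex AI TensorBoard instance (placeholder names).
tensorboard = aiplatform.Tensorboard.create(
    display_name="my-tensorboard",
    project="my-project",
    location="us-central1",
)

# Use the new instance as the backing time series metric store for the experiment.
aiplatform.init(
    experiment="my-experiment",
    project="my-project",
    location="us-central1",
    experiment_tensorboard=tensorboard,
)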
Python
from typing import Dict, Optional

from google.cloud import aiplatform
from google.protobuf import timestamp_pb2


def log_time_series_metrics_sample(
    experiment_name: str,
    run_name: str,
    metrics: Dict[str, float],
    step: Optional[int],
    wall_time: Optional[timestamp_pb2.Timestamp],
    project: str,
    location: str,
):
    aiplatform.init(experiment=experiment_name, project=project, location=location)

    aiplatform.start_run(run=run_name, resume=True)

    aiplatform.log_time_series_metrics(metrics=metrics, step=step, wall_time=wall_time)

- experiment_name: Provide the name of your experiment. You can find your list of experiments in the Google Cloud console by selecting Experiments in the section nav.
- run_name: Specify a run name (see start_run).
- metrics: Dictionary where keys are metric names and values are metric values.
- step: Optional. Step index of this data point within the run.
- wall_time: Optional. Wall clock timestamp when this data point was generated by the end user. If not provided, wall_time is generated based on the value from time.time().
- project: Your project ID. You can find project IDs on the Google Cloud console welcome page.
- location: See List of available locations.
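For instance, a hypothetical call to the sample function above that supplies an explicit wall-clock timestamp could look like this; the project ID, region, and metric values are placeholders.

import time

from google.protobuf import timestamp_pb2

log_time_series_metrics_sample(
    experiment_name="my-experiment",
    run_name="my-run",
    metrics={"mse": 2500.00, "rmse": 50.00},
    step=5,
    wall_time=timestamp_pb2.Timestamp(seconds=int(time.time())),
    project="my-project",      # placeholder project ID
    location="us-central1",    # placeholder region
)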
Step and wall time
The log_time_series_metrics API optionally accepts step and wall_time.

- step: Optional. Step index of this data point within the run. If not provided, an increment over the latest step among all time series metrics already logged is used. If the step already exists for any of the provided metric keys, it is overwritten.
- wall_time: Optional. The seconds after epoch of the logged metric. If not provided, it defaults to the value of Python's time.time().
For example:
aiplatform.log_time_series_metrics({"mse":2500.00,"rmse":50.00})
Log to a specific step
aiplatform.log_time_series_metrics({"mse":2500.00,"rmse":50.00},step=8)
Include wall_time

aiplatform.log_time_series_metrics(
    {"mse": 2500.00, "rmse": 50.00},
    step=10,
    wall_time=timestamp_pb2.Timestamp(seconds=int(time.time())),
)
Parameters
Parameters are keyed input values that configure a run, regulate the behavior of the run, and affect the results of the run. Examples include learning rate, dropout rate, and number of training steps. Log parameters using the log_params method.
Python
from typing import Dict, Union

from google.cloud import aiplatform


def log_params_sample(
    experiment_name: str,
    run_name: str,
    params: Dict[str, Union[float, int, str]],
    project: str,
    location: str,
):
    aiplatform.init(experiment=experiment_name, project=project, location=location)

    aiplatform.start_run(run=run_name, resume=True)

    aiplatform.log_params(params)

For example:

aiplatform.log_params({"learning_rate": 0.01, "n_estimators": 10})
- experiment_name: Provide a name for your experiment. You can find your list of experiments in the Google Cloud console by selecting Experiments in the section nav.
- run_name: Specify a run name (see start_run).
- params: Parameter key-value pairs. For example: {'learning_rate': 0.01} (see log_params).
- project: Your project ID. You can find project IDs on the Google Cloud console welcome page.
- location: See List of available locations.
Classification metrics
In addition to summary metrics and time series metrics, confusion matrices and ROC curves are commonly used metrics. They can be logged to Vertex AI Experiments using the log_classification_metrics API.
Python
from typing import List, Optional

from google.cloud import aiplatform


def log_classification_metrics_sample(
    experiment_name: str,
    run_name: str,
    project: str,
    location: str,
    labels: Optional[List[str]] = None,
    matrix: Optional[List[List[int]]] = None,
    fpr: Optional[List[float]] = None,
    tpr: Optional[List[float]] = None,
    threshold: Optional[List[float]] = None,
    display_name: Optional[str] = None,
) -> None:
    aiplatform.init(experiment=experiment_name, project=project, location=location)

    aiplatform.start_run(run=run_name, resume=True)

    aiplatform.log_classification_metrics(
        labels=labels,
        matrix=matrix,
        fpr=fpr,
        tpr=tpr,
        threshold=threshold,
        display_name=display_name,
    )

- experiment_name: Provide a name for your experiment. You can find your list of experiments in the Google Cloud console by selecting Experiments in the section nav.
- run_name: Specify a run name (see start_run).
- project: Your project ID. You can find project IDs on the Google Cloud console welcome page.
- location: See List of available locations.
- labels: List of label names for the confusion matrix. Must be set if 'matrix' is set.
- matrix: Values for the confusion matrix. Must be set if 'labels' is set.
- fpr: List of false positive rates for the ROC curve. Must be set if 'tpr' or 'threshold' is set.
- tpr: List of true positive rates for the ROC curve. Must be set if 'fpr' or 'threshold' is set.
- threshold: List of thresholds for the ROC curve. Must be set if 'fpr' or 'tpr' is set.
- display_name: The user-defined name for the classification metric artifact.
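As a usage sketch, a hypothetical call that logs a two-class confusion matrix and a small ROC curve might look like the following; all names and values are illustrative placeholders.

log_classification_metrics_sample(
    experiment_name="my-experiment",
    run_name="my-run",
    project="my-project",        # placeholder project ID
    location="us-central1",      # placeholder region
    labels=["cat", "dog"],
    matrix=[[38, 2], [5, 55]],   # 2x2 confusion matrix for the two labels above
    fpr=[0.0, 0.1, 0.4, 1.0],
    tpr=[0.0, 0.6, 0.9, 1.0],
    threshold=[1.0, 0.8, 0.5, 0.0],
    display_name="cat-vs-dog-classification-metrics",
)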
View experiment runs list in the Google Cloud console
- In the Google Cloud console, go to the Experiments page.

  Go to Experiments

  A list of experiments appears.

- Select the experiment that you want to check.

  A list of runs appears.

For more details, see Compare and analyze runs.
What's next
- Notebook tutorial
- Blog post