Getting Data in and out of Tune#

Often, you will find yourself needing to pass data into Tune Trainables (datasets, models, other large parameters) and get data out of them (metrics, checkpoints, other artifacts). In this guide, we’ll explore different ways of doing that and see in which circumstances each should be used.

Let’s start by defining a simple Trainable function. We’ll be expanding this function with different functionality as we go.

```python
import random
import time

import pandas as pd


def training_function(config):
    # For now, we have nothing here.
    data = None
    model = {"hyperparameter_a": None, "hyperparameter_b": None}
    epochs = 0

    # Simulate training & evaluation - we obtain back a "metric" and a "trained_model".
    for epoch in range(epochs):
        # Simulate doing something expensive.
        time.sleep(1)
        metric = (0.1 + model["hyperparameter_a"] * epoch / 100) ** (-1) + model[
            "hyperparameter_b"
        ] * 0.1 * data["A"].sum()
        trained_model = {"state": model, "epoch": epoch}
```

Our training_function requires a pandas DataFrame, a model with some hyperparameters, and the number of epochs to train the model for as inputs. The hyperparameters of the model impact the metric returned, and in each epoch (iteration of training), the trained_model state is changed.

We will run hyperparameter optimization using the Tuner API.

```python
from ray.tune import Tuner
from ray import tune

tuner = Tuner(training_function, tune_config=tune.TuneConfig(num_samples=4))
```

Getting data into Tune#

The first order of business is to provide the inputs for the Trainable. We can broadly separate them into two categories: variables and constants.

Variables are the parameters we want to tune. They will be different for every Trial. For example, these may be the learning rate and batch size for a neural network, the number of trees and maximum depth for a random forest, or the data partition if you are using Tune as an execution engine for batch training.

Constants are the parameters that are the same for every Trial. Those can be the number of epochs, model hyperparameters we want to set but not tune, the dataset and so on. Often, the constants will be quite large (e.g. the dataset or the model).

Warning

Objects from the outer scope of the training_function will also be automatically serialized and sent to Trial Actors, which may lead to unintended behavior. Examples include global locks not working (as each Actor operates on a copy) or general errors related to serialization. Best practice is not to refer to any objects from the outer scope inside the training_function.

Passing data into a Tune run through search spaces#

Note

TL;DR - use the param_space argument to specify small, serializable constants and variables.

The first way of passing inputs into Trainables is the search space (it may also be called the parameter space or the config). In the Trainable itself, it maps to the config dict passed in as an argument to the function. You define the search space using the param_space argument of the Tuner. The search space is a dict and may be composed of distributions, which will sample a different value for each Trial, or of constant values. The search space may be composed of nested dictionaries, and those in turn can have distributions as well.
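
As a minimal sketch of such a search space (the keys below are hypothetical and only illustrate mixing constants, distributions, and a nested dictionary):

```python
from ray import tune
from ray.tune import Tuner

# Hypothetical search space: "lr" is sampled per Trial, "epochs" is a constant,
# and "optimizer" is a nested dict that itself contains a distribution.
param_space = {
    "lr": tune.loguniform(1e-4, 1e-1),
    "epochs": 10,
    "optimizer": {
        "name": tune.choice(["adam", "sgd"]),
        "momentum": 0.9,
    },
}

tuner = Tuner(training_function, param_space=param_space)
```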

Warning

Each value in the search space will be saved directly in the Trial metadata. This means that every value in the search space must be serializable and take up a small amount of memory.

For example, passing in a large pandas DataFrame or an unserializable model object as a value in the search space will lead to unwanted behavior. At best it will cause large slowdowns and disk space usage, as the Trial metadata saved to disk will also contain this data. At worst, an exception will be raised, as the data cannot be sent over to the Trial workers. For more details, see How can I avoid bottlenecks?.

Instead, use strings or other small identifiers as your values, and initialize/load the objects inside your Trainable based on those identifiers, as sketched below.
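
As a rough sketch of this pattern (the dataset names and the load_dataset helper below are hypothetical):

```python
import pandas as pd

from ray import tune
from ray.tune import Tuner


def load_dataset(name: str) -> pd.DataFrame:
    # Hypothetical helper that maps a small, serializable identifier
    # to a large object, created inside the Trial process.
    if name == "small":
        return pd.DataFrame({"A": [1, 2, 3]})
    return pd.DataFrame({"A": range(1_000)})


def identifier_based_training_function(config):
    # Only the string identifier travels through the search space.
    data = load_dataset(config["dataset_name"])
    print(data["A"].sum())


tuner = Tuner(
    identifier_based_training_function,
    param_space={"dataset_name": tune.choice(["small", "large"])},
)
```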

Note

Datasets can be used as values in the search space directly.

In our example, we want to tune the two model hyperparameters. We also want to set the number of epochs, so that we can easily tweak it later. For the hyperparameters, we will use the tune.uniform distribution. We will also modify the training_function to obtain those values from the config dictionary.

```python
def training_function(config):
    # For now, we have nothing here.
    data = None
    model = {
        "hyperparameter_a": config["hyperparameter_a"],
        "hyperparameter_b": config["hyperparameter_b"],
    }
    epochs = config["epochs"]

    # Simulate training & evaluation - we obtain back a "metric" and a "trained_model".
    for epoch in range(epochs):
        # Simulate doing something expensive.
        time.sleep(1)
        metric = (0.1 + model["hyperparameter_a"] * epoch / 100) ** (-1) + model[
            "hyperparameter_b"
        ] * 0.1 * data["A"].sum()
        trained_model = {"state": model, "epoch": epoch}


tuner = Tuner(
    training_function,
    param_space={
        "hyperparameter_a": tune.uniform(0, 20),
        "hyperparameter_b": tune.uniform(-100, 100),
        "epochs": 10,
    },
)
```

Using tune.with_parameters to access data in Tune runs#

Note

TL;DR - use the tune.with_parameters utility function to specify large constant parameters.

If we have large objects that are constant across Trials, we can use the tune.with_parameters utility to pass them into the Trainable directly. The objects will be stored in the Ray object store so that each Trial worker may access them to obtain a local copy to use in its process.

Tip

Objects put into the Ray object store must be serializable.

Note that the serialization (once) and deserialization (for each Trial) of large objects may incur a performance overhead.

In our example, we will pass the data DataFrame using tune.with_parameters. In order to do that, we need to modify our function signature to include data as an argument.

```python
def training_function(config, data):
    model = {
        "hyperparameter_a": config["hyperparameter_a"],
        "hyperparameter_b": config["hyperparameter_b"],
    }
    epochs = config["epochs"]

    # Simulate training & evaluation - we obtain back a "metric" and a "trained_model".
    for epoch in range(epochs):
        # Simulate doing something expensive.
        time.sleep(1)
        metric = (0.1 + model["hyperparameter_a"] * epoch / 100) ** (-1) + model[
            "hyperparameter_b"
        ] * 0.1 * data["A"].sum()
        trained_model = {"state": model, "epoch": epoch}


tuner = Tuner(
    training_function,
    param_space={
        "hyperparameter_a": tune.uniform(0, 20),
        "hyperparameter_b": tune.uniform(-100, 100),
        "epochs": 10,
    },
)
```

The next step is to wrap the training_function using tune.with_parameters before passing it into the Tuner. Every keyword argument of the tune.with_parameters call will be mapped to the keyword arguments in the Trainable signature.

```python
data = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})

tuner = Tuner(
    tune.with_parameters(training_function, data=data),
    param_space={
        "hyperparameter_a": tune.uniform(0, 20),
        "hyperparameter_b": tune.uniform(-100, 100),
        "epochs": 10,
    },
    tune_config=tune.TuneConfig(num_samples=4),
)
```

Loading data in a Tune Trainable#

You can also load data directly in the Trainable from, for example, cloud storage, shared file storage such as NFS, or the local disk of the Trainable worker.

Warning

When loading from disk, ensure that all nodes in your cluster have access to the file you are trying to load.

A common use case is to load the dataset from S3 or any other cloud storage with pandas, Arrow, or any other framework.

The working directory of the Trainable worker will be automatically changed to the corresponding Trial directory. For more details, see How do I access relative filepaths in my Tune training function?.
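
A minimal sketch of this pattern, assuming a hypothetical S3 path and that s3fs is installed so pandas can read directly from S3:

```python
import pandas as pd


def training_function(config):
    # Hypothetical bucket path; pandas reads it directly when s3fs is installed.
    data = pd.read_csv("s3://my-bucket/datasets/train.csv")
    # ... continue with training as before ...
```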

Our tuning run can now be executed, though we will not yet obtain any meaningful outputs back.

```python
results = tuner.fit()
```

Getting data out of Ray Tune#

We can now run our tuning run using the training_function Trainable. The next step is to report metrics to Tune that can be used to guide the optimization. We will also want to checkpoint our trained models so that we can resume the training after an interruption, and to use them for prediction later.

The ray.tune.report API is used to get data out of the Trainable workers. It can be called multiple times in the Trainable function. Each call corresponds to one iteration (epoch, step, tree) of training.

Reporting metrics with Tune#

Metrics are values passed through the metrics argument in a tune.report call. Metrics can be used by Tune Search Algorithms and Schedulers to direct the search. After the tuning run is complete, you can analyze the results, which include the reported metrics.

Note

Similarly to search space values, each value reported as a metric will be saved directly in the Trial metadata. This means that every value reported as a metric must be serializable and take up a small amount of memory.

Note

Tune will automatically include some metrics, such as the training iteration, timestamp, and more. See here for the entire list.

In our example, we want to maximize the metric. We will report it to Tune each epoch, and set the metric and mode arguments in tune.TuneConfig to let Tune know that it should use it as the optimization objective.

```python
def training_function(config, data):
    model = {
        "hyperparameter_a": config["hyperparameter_a"],
        "hyperparameter_b": config["hyperparameter_b"],
    }
    epochs = config["epochs"]

    # Simulate training & evaluation - we obtain back a "metric" and a "trained_model".
    for epoch in range(epochs):
        # Simulate doing something expensive.
        time.sleep(1)
        metric = (0.1 + model["hyperparameter_a"] * epoch / 100) ** (-1) + model[
            "hyperparameter_b"
        ] * 0.1 * data["A"].sum()
        trained_model = {"state": model, "epoch": epoch}
        tune.report(metrics={"metric": metric})


tuner = Tuner(
    tune.with_parameters(training_function, data=data),
    param_space={
        "hyperparameter_a": tune.uniform(0, 20),
        "hyperparameter_b": tune.uniform(-100, 100),
        "epochs": 10,
    },
    tune_config=tune.TuneConfig(num_samples=4, metric="metric", mode="max"),
)
```

Logging metrics with Tune callbacks#

Every metric logged using tune.report can be accessed during the tuning run through Tune Callbacks. Ray Tune provides several built-in integrations with popular frameworks, such as MLflow, Weights & Biases, Comet ML, and more. You can also use the Callback API to create your own callbacks.

Callbacks are passed in the callbacks argument of the Tuner’s RunConfig.

In our example, we’ll use the MLflow callback to track the progress of our tuning run and the changing value of the metric (requires mlflow to be installed).

```python
import ray.tune
from ray.tune.logger.mlflow import MLflowLoggerCallback


def training_function(config, data):
    model = {
        "hyperparameter_a": config["hyperparameter_a"],
        "hyperparameter_b": config["hyperparameter_b"],
    }
    epochs = config["epochs"]

    # Simulate training & evaluation - we obtain back a "metric" and a "trained_model".
    for epoch in range(epochs):
        # Simulate doing something expensive.
        time.sleep(1)
        metric = (0.1 + model["hyperparameter_a"] * epoch / 100) ** (-1) + model[
            "hyperparameter_b"
        ] * 0.1 * data["A"].sum()
        trained_model = {"state": model, "epoch": epoch}
        tune.report(metrics={"metric": metric})


tuner = tune.Tuner(
    tune.with_parameters(training_function, data=data),
    param_space={
        "hyperparameter_a": tune.uniform(0, 20),
        "hyperparameter_b": tune.uniform(-100, 100),
        "epochs": 10,
    },
    tune_config=tune.TuneConfig(num_samples=4, metric="metric", mode="max"),
    run_config=tune.RunConfig(
        callbacks=[MLflowLoggerCallback(experiment_name="example")]
    ),
)
```

Getting data out of Tune using checkpoints & other artifacts#

Aside from metrics, you may want to save the state of your trained model and any other artifacts to allow resumption after a training failure, as well as further inspection and use. These cannot be saved as metrics, as they are often far too large and may not be easily serializable. Finally, they should be persisted on disk or cloud storage to allow access after the Tune run is interrupted or terminated.

Ray Train provides a Checkpoint API for that purpose. Checkpoint objects can be created from various sources (dictionaries, directories, cloud storage).

In Ray Tune, Checkpoints are created by the user in their Trainable functions and reported using the optional checkpoint argument of tune.report. Checkpoints can contain arbitrary data and can be freely passed around the Ray cluster. After a tuning run is over, Checkpoints can be obtained from the results.

Ray Tune can be configured to automatically sync checkpoints to cloud storage, keep only a certain number of checkpoints to save space (with ray.tune.CheckpointConfig), and more.
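
For example, a minimal sketch of keeping only the most recent checkpoints of each Trial (the exact number here is illustrative):

```python
from ray import tune

run_config = tune.RunConfig(
    checkpoint_config=tune.CheckpointConfig(
        # Keep only the two most recent checkpoints per Trial to save disk space.
        num_to_keep=2,
    ),
)
```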

Note

The experiment state itself is checkpointed separately. See Appendix: Types of data stored by Tune for more details.

In our example, we want to be able to resume the training from the latest checkpoint, and to save the trained_model in a checkpoint every iteration. To accomplish this, we will use the tune.get_checkpoint, tune.report, and Checkpoint APIs.

```python
import os
import pickle
import tempfile

from ray import tune


def training_function(config, data):
    model = {
        "hyperparameter_a": config["hyperparameter_a"],
        "hyperparameter_b": config["hyperparameter_b"],
    }
    epochs = config["epochs"]

    # Load the checkpoint, if there is any.
    checkpoint = tune.get_checkpoint()
    start_epoch = 0
    if checkpoint:
        with checkpoint.as_directory() as checkpoint_dir:
            # Open the pickled checkpoint in binary read mode.
            with open(os.path.join(checkpoint_dir, "model.pkl"), "rb") as f:
                checkpoint_dict = pickle.load(f)
        start_epoch = checkpoint_dict["epoch"] + 1
        model = checkpoint_dict["state"]

    # Simulate training & evaluation - we obtain back a "metric" and a "trained_model".
    for epoch in range(start_epoch, epochs):
        # Simulate doing something expensive.
        time.sleep(1)
        metric = (0.1 + model["hyperparameter_a"] * epoch / 100) ** (-1) + model[
            "hyperparameter_b"
        ] * 0.1 * data["A"].sum()
        checkpoint_dict = {"state": model, "epoch": epoch}

        # Create the checkpoint.
        with tempfile.TemporaryDirectory() as temp_checkpoint_dir:
            # Write the pickled checkpoint in binary write mode.
            with open(os.path.join(temp_checkpoint_dir, "model.pkl"), "wb") as f:
                pickle.dump(checkpoint_dict, f)
            tune.report(
                {"metric": metric},
                checkpoint=tune.Checkpoint.from_directory(temp_checkpoint_dir),
            )


tuner = tune.Tuner(
    tune.with_parameters(training_function, data=data),
    param_space={
        "hyperparameter_a": tune.uniform(0, 20),
        "hyperparameter_b": tune.uniform(-100, 100),
        "epochs": 10,
    },
    tune_config=tune.TuneConfig(num_samples=4, metric="metric", mode="max"),
    run_config=tune.RunConfig(
        callbacks=[MLflowLoggerCallback(experiment_name="example")]
    ),
)
```

With all of those changes implemented, we can now run our tuning and obtain meaningful metrics and artifacts.

```python
results = tuner.fit()
results.get_dataframe()
```
```
2022-11-30 17:40:28,839 INFO tune.py:762 -- Total run time: 15.79 seconds (15.65 seconds for the tuning loop).

      metric  time_this_iter_s  training_iteration     trial_id  ...  config/hyperparameter_a  config/hyperparameter_b                                              logdir
0 -58.399962          1.015951                  10  0b239_00000  ...                18.065981               -98.298928  /home/ubuntu/ray_results/training_function_202...
1 -24.461518          1.030420                  10  0b239_00001  ...                 1.544918               -47.741455  /home/ubuntu/ray_results/training_function_202...
2  18.510299          1.034228                  10  0b239_00002  ...                 8.129285                28.846415  /home/ubuntu/ray_results/training_function_202...
3 -16.138780          1.020072                  10  0b239_00003  ...                17.982020               -27.867871  /home/ubuntu/ray_results/training_function_202...

4 rows × 23 columns
```

Checkpoints, metrics, and the log directory for each trial can be accessed through the ResultGrid output of a Tune experiment. For more information on how to interact with the returned ResultGrid, see Analyzing Tune Experiment Results.
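
As a small sketch of that interaction, continuing from the results object returned by tuner.fit() above:

```python
# Pick the best Trial according to the reported "metric".
best_result = results.get_best_result(metric="metric", mode="max")

# Metrics from the last tune.report call of that Trial.
print(best_result.metrics["metric"])

# The latest checkpoint reported by that Trial (a tune.Checkpoint).
with best_result.checkpoint.as_directory() as checkpoint_dir:
    print(checkpoint_dir)
```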

How do I access Tune results after I am finished?#

After you have finished running the Python session, you can still access the results and checkpoints. By default, Tune will save the experiment results to the ~/ray_results local directory. You can configure Tune to persist results in the cloud as well. See How to Configure Persistent Storage in Ray Tune for more information on how to configure storage options for persisting experiment results.
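
For instance, a minimal sketch of persisting results to cloud storage through the RunConfig (the experiment name and bucket URI below are hypothetical):

```python
from ray import tune

tuner = tune.Tuner(
    tune.with_parameters(training_function, data=data),
    run_config=tune.RunConfig(
        name="tune-example",                          # hypothetical experiment name
        storage_path="s3://my-bucket/tune-results",   # hypothetical bucket
    ),
)
```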

You can restore the Tune experiment by calling Tuner.restore(path_or_cloud_uri, trainable), where path_or_cloud_uri points to a location either on the filesystem or in the cloud where the experiment was saved. After the Tuner has been restored, you can access the results and checkpoints by calling Tuner.get_results() to receive the ResultGrid object, and then proceeding as outlined in the previous section.
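
A minimal sketch of that flow (the experiment path below is hypothetical; pass the same Trainable that was used for the original run):

```python
from ray import tune

tuner = tune.Tuner.restore(
    "~/ray_results/tune-example",  # hypothetical path; may also be e.g. an s3:// URI
    trainable=tune.with_parameters(training_function, data=data),
)
results = tuner.get_results()
print(results.get_best_result(metric="metric", mode="max").metrics)
```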

