
Bayesian Optimization in PyTorch


Key Features

Modular

Plug in new models, acquisition functions, and optimizers.

Built on PyTorch

Easily integrate neural network modules. Native GPU & autograd support.

Scalable

Support for scalable GPs via GPyTorch. Run code on multiple devices.
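As a minimal sketch of the modularity point above (illustrative, not taken from the official docs), a new analytic acquisition function can be plugged in by subclassing AnalyticAcquisitionFunction; the class name and the beta parameter below are assumptions made for this example.

    # Minimal sketch (illustrative, not from the docs): a custom UCB-style
    # analytic acquisition function plugged into BoTorch by subclassing
    # AnalyticAcquisitionFunction. The class name and `beta` are assumptions.
    import torch
    from botorch.acquisition.analytic import AnalyticAcquisitionFunction
    from botorch.utils.transforms import t_batch_mode_transform

    class ScaledUpperConfidenceBound(AnalyticAcquisitionFunction):
        def __init__(self, model, beta: float):
            super().__init__(model=model)
            self.register_buffer("beta", torch.as_tensor(beta))

        @t_batch_mode_transform(expected_q=1)
        def forward(self, X):
            # X has shape batch x 1 x d; return one acquisition value per batch
            posterior = self.model.posterior(X)
            mean = posterior.mean.squeeze(-1).squeeze(-1)
            sigma = posterior.variance.clamp_min(1e-9).sqrt().squeeze(-1).squeeze(-1)
            return mean + self.beta.sqrt() * sigma

A class like this can then be handed to optimize_acqf in the same way as the built-in acquisition functions shown in the Get Started example below.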

References

BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization
@inproceedings{balandat2020botorch,
title = {{BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization}},
author = {Balandat, Maximilian and Karrer, Brian and Jiang, Daniel R. and Daulton, Samuel and Letham, Benjamin and Wilson, Andrew Gordon and Bakshy, Eytan},
booktitle = {Advances in Neural Information Processing Systems 33},
year = 2020,
url = {http://arxiv.org/abs/1910.06403}
}
Check out some other papers using BoTorch.

Get Started

  1. Install BoTorch:

    via pip (recommended):
    pip install botorch
    via Anaconda (from the unofficial conda-forge channel):
    conda install botorch -c gpytorch -c conda-forge
  2. Fit a model:

    import torch
    from botorch.models import SingleTaskGP
    from botorch.models.transforms import Normalize, Standardize
    from botorch.fit import fit_gpytorch_mll
    from gpytorch.mlls import ExactMarginalLogLikelihood

    train_X = torch.rand(10, 2, dtype=torch.double) * 2
    Y = 1 - torch.linalg.norm(train_X - 0.5, dim=-1, keepdim=True)
    Y = Y + 0.1 * torch.randn_like(Y)  # add some noise

    gp = SingleTaskGP(
        train_X=train_X,
        train_Y=Y,
        input_transform=Normalize(d=2),
        outcome_transform=Standardize(m=1),
    )
    mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
    fit_gpytorch_mll(mll)
  3. Construct an acquisition function:

    from botorch.acquisition import LogExpectedImprovement

    logEI = LogExpectedImprovement(model=gp, best_f=Y.max())
  4. Optimize the acquisition function:

    from botorch.optim import optimize_acqf

    bounds = torch.stack([torch.zeros(2), torch.ones(2)]).to(torch.double)
    candidate, acq_value = optimize_acqf(
        logEI, bounds=bounds, q=1, num_restarts=5, raw_samples=20,
    )
    candidate  # tensor([[0.2981, 0.2401]], dtype=torch.float64)
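
Putting the four steps above together, here is a minimal sketch (not part of the official quick start) of a closed Bayesian optimization loop that re-fits the model, optimizes the acquisition function, and evaluates the new candidate on each iteration; the toy objective and the 20-evaluation budget are assumptions for illustration.

    # Minimal sketch of a full BO loop built from steps 2-4 above.
    # The toy objective and evaluation budget are assumptions for illustration.
    import torch
    from botorch.models import SingleTaskGP
    from botorch.models.transforms import Normalize, Standardize
    from botorch.fit import fit_gpytorch_mll
    from botorch.acquisition import LogExpectedImprovement
    from botorch.optim import optimize_acqf
    from gpytorch.mlls import ExactMarginalLogLikelihood

    def objective(X):
        # noisy toy objective with its optimum at (0.5, 0.5)
        noise = 0.1 * torch.randn(X.shape[0], 1, dtype=X.dtype)
        return 1 - torch.linalg.norm(X - 0.5, dim=-1, keepdim=True) + noise

    bounds = torch.stack([torch.zeros(2), torch.ones(2)]).to(torch.double)
    train_X = torch.rand(10, 2, dtype=torch.double)
    train_Y = objective(train_X)

    for _ in range(20):  # assumed evaluation budget
        gp = SingleTaskGP(
            train_X=train_X,
            train_Y=train_Y,
            input_transform=Normalize(d=2),
            outcome_transform=Standardize(m=1),
        )
        mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
        fit_gpytorch_mll(mll)
        logEI = LogExpectedImprovement(model=gp, best_f=train_Y.max())
        candidate, _ = optimize_acqf(
            logEI, bounds=bounds, q=1, num_restarts=5, raw_samples=20,
        )
        train_X = torch.cat([train_X, candidate])
        train_Y = torch.cat([train_Y, objective(candidate)])

    best = train_Y.argmax()
    print(train_X[best], train_Y[best])  # best observed point and value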