Releases: pytorch/botorch

Maintenance Release, PFN integration, VBLL surrogates, Classifier-based constraint support

06 May 15:42

Highlights

  • Prior Fitted Network (PFN) surrogate model integration (#2784).
  • Variational Bayesian last-layer models as surrogate models (#2754).
  • Probabilities of feasibility for classifier-based constraints in acquisition functions (#2776).

New Features

  • Helper for evaluating feasibility of candidate points (#2733).
    • Check for feasibility in gen_candidates_scipy and error out for infeasible candidates (#2737).
    • Return a feasible candidate if there is one and return_best_only=True (#2778).
  • Allow for observation noise without a provided evaluation_mask in ModelListGP (#2735).
  • Implement incremental qLogNEI via an incremental argument to qLogNoisyExpectedImprovement (#2760).
  • Add utility for computing AIC/BIC/MLL from a model (#2785).
  • New test functions:
    • Multi-fidelity test functions with discrete fidelities (#2796).
    • Keane bump function (#2802).
    • Mixed Ackley test function (#2830).
    • LABS test function (#2832).
  • Add parameter types to test functions to support problems defined in mixed / discrete spaces (#2809).
    • Add input validation to test functions (#2829).
  • Add [q]LogProbabilityOfFeasibility acquisition functions (#2815).
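The classifier-based constraint support in the highlights weights an acquisition value by a model's predicted probability of feasibility; in log space this becomes an additive term. A minimal illustrative sketch (the function name and signature here are hypothetical, not the BoTorch API):

```python
import math

def log_constrained_acq(log_acq_value, p_feasible):
    """Combine a log-acquisition value with a classifier's P(feasible).

    Hypothetical helper: multiplying acquisition value by P(feasible)
    corresponds to adding log P(feasible) in log space.
    """
    if p_feasible == 0.0:
        return -math.inf  # a surely-infeasible point is ruled out entirely
    return log_acq_value + math.log(p_feasible)

# A candidate with log-EI of -1.0 and 80% predicted feasibility:
print(log_constrained_acq(-1.0, 0.8))
```

Points the classifier deems certainly infeasible get a score of negative infinity, so the optimizer never selects them.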

Bug Fixes

  • Remove hard-coded dtype from best_f buffers (#2725).
  • Fix dtype/NaN issue in StratifiedStandardize (#2757).
  • Properly handle observed noise in AdditiveMapSaasSingleTaskGP with outcome transforms (#2763).
  • Do not count STOPPED (due to specified budget) as a model fitting failure (#2767).
  • Ensure that initialize_q_batch always includes the maximum value when called in batch mode (#2773).
  • Fix posterior with observation noise in batched MTGP models (#2782).
  • Detach tensor in gen_candidates_scipy to avoid test failure due to new warning (#2797).
  • Fix batch computation in Pivoted Cholesky (#2823).

Other Changes

  • Add optimal values for synthetic constrained optimization problems (#2730).
    • Update max_hv and reference point for Penicillin problem (#2771).
    • Add optimal value to SpeedReducer problem (#2799).
  • Update nonlinear_constraint_is_feasible to return a boolean tensor (#2731).
  • Restructure sampling methods for info-theoretic acquisition functions (#2753).
  • Prune baseline points in qLogNEI by default (#2762).
  • Misc updates to MES-based acquisition functions (#2769).
  • Pass option to reset submodules in train method for fully Bayesian models (#2783).
  • Put outcome transforms into train mode in model constructors (#2817).
  • LogEI: select cache_root based on model support (#2820).
  • Remove Ax dependency from BoTorch tutorials and reference Ax tutorials instead (#2839).

Deprecations and removals

  • Remove deprecated gp_sampling module (#2768).
  • Remove qMultiObjectiveMaxValueEntropy acquisition function (#2800).
  • Remove model converters (#2801).

Maintenance Release, Website Upgrade, BO with Relevance Pursuit, LatentKroneckerGP and MAP-SAAS Models

03 Feb 16:37

Highlights

  • BoTorch website has been upgraded to utilize Docusaurus v3, with the API
    reference being hosted by ReadTheDocs. The tutorials now expose an option to
    open with Colab, for easy access to a runtime with modifiable tutorials.
    The old versions of the website can be found at archive.botorch.org (#2653).
  • RobustRelevancePursuitSingleTaskGP, a robust Gaussian process model that adaptively identifies
    outliers and leverages Bayesian model selection (paper) (#2608, #2690, #2707).
  • LatentKroneckerGP, a scalable model for data on partially observed grids, like the joint modeling
    of hyperparameters and partially completed learning curves in AutoML (paper) (#2647).
  • Add MAP-SAAS model, which utilizes the sparse axis-aligned subspace priors
    (paper) with MAP model fitting (#2694).

Compatibility

  • Require GPyTorch==1.14 and linear_operator==0.6 (#2710).
  • Remove support for anaconda (official package) (#2617).
  • Remove mpmath dependency pin (#2640).
  • Updates to optimization routines to support SciPy>1.15:
    • Use threadpoolctl in minimize_with_timeout to prevent CPU oversubscription (#2712).
    • Update optimizer output parsing to make model fitting compatible with SciPy>1.15 (#2667).

New Features

  • Add support for priors in OAK Kernel (#2535).
  • Add BatchBroadcastedTransformList, which broadcasts a list of InputTransforms over batch shapes (#2558).
  • InteractionFeatures input transform (#2560).
  • Implement percentile_of_score, which takes inputs data and score, and returns the percentile of
    values in data that are below score (#2568).
  • Add optimize_acqf_mixed_alternating, which supports optimization over mixed discrete & continuous spaces (#2573).
  • Add support for PosteriorTransform to get_optimal_samples and optimize_posterior_samples (#2576).
  • Support inequality constraints & X_avoid in optimize_acqf_discrete (#2593).
  • Add ability to mix batch initial conditions and internal IC generation (#2610).
  • Add qPosteriorStandardDeviation acquisition function (#2634).
  • TopK downselection for initial batch generation (#2636).
  • Support optimization over mixed spaces in optimize_acqf_homotopy (#2639).
  • Add InfeasibilityError exception class (#2652).
  • Support InputTransforms in SparseOutlierLikelihood and get_posterior_over_support (#2659).
  • StratifiedStandardize outcome transform (#2671).
  • Add center argument to Normalize (#2680).
  • Add input normalization step in Warp input transform (#2692).
  • Support mixing fully Bayesian & SingleTaskGP models in ModelListGP (#2693).
  • Add abstract fully Bayesian GP class and fully Bayesian linear GP model (#2696, #2697).
  • Tutorial on BO constrained by probability of classification model (#2700).
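The percentile_of_score helper listed above (#2568) returns the percentage of entries in data that fall below score. A plain-Python sketch of those semantics (BoTorch's version operates on tensors; this toy function is only illustrative):

```python
def percentile_of_score(data, score):
    """Percentage of entries in `data` that are strictly below `score`."""
    below = sum(1 for v in data if v < score)
    return 100.0 * below / len(data)

# Three of the four values are below 3.5:
print(percentile_of_score([1, 2, 3, 4], 3.5))  # 75.0
```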

Bug Fixes

  • Fix error in decoupled_mobo tutorial due to torch/numpy issues (#2550).
  • Raise error for MTGP in batch_cross_validation (#2554).
  • Fix posterior method in BatchedMultiOutputGPyTorchModel for JIT tracing (#2592).
  • Replace hard-coded double precision in test_functions with default dtype (#2597).
  • Remove as_tensor argument of set_tensors_from_ndarray_1d (#2615).
  • Skip fixed feature enumerations in optimize_acqf_mixed that can't satisfy the parameter constraints (#2614).
  • Fix get_default_partitioning_alpha for >7 objectives (#2646).
  • Fix random seed handling in sample_hypersphere (#2688).
  • Fix bug in optimize_objective with fixed features (#2691).
  • FullyBayesianSingleTaskGP.train should not return None (#2702).

Other Changes

  • More efficient sampling from KroneckerMultiTaskGP (#2460).
  • Update HigherOrderGP to use new priors & standardize outcome transform by default (#2555).
  • Update initialize_q_batch methods to return both candidates and the corresponding acquisition values (#2571).
  • Update optimization documentation with LogEI insights (#2587).
  • Make all arguments in optimize_acqf_homotopy explicit (#2588).
  • Introduce trial_indices argument to SupervisedDataset (#2595).
  • Make optimizers raise an error when provided negative indices for fixed features (#2603).
  • Make input transforms Modules by default (#2607).
  • Reduce memory usage in ConstrainedMaxPosteriorSampling (#2622).
  • Add clone method to datasets (#2625).
  • Add support for continuous relaxation within optimize_acqf_mixed_alternating (#2635).
  • Update indexing in qLogNEI._get_samples_and_objectives to support multiple input batches (#2649).
  • Pass X to OutcomeTransforms (#2663).
  • Use mini-batches when evaluating candidates within optimize_acqf_discrete_local_search (#2682).

Deprecations

  • Remove HeteroskedasticSingleTaskGP (#2616).
  • Remove FixedNoiseDataset (#2626).
  • Remove support for legacy format non-linear constraints (#2627).
  • Remove maximize option from information theoretic acquisition functions (#2590).

Increased robustness to dimensionality with updated hyperparameter priors

17 Sep 16:27

[0.12.0] -- Sep 17, 2024

Major changes

  • Update most models to use dimension-scaled log-normal hyperparameter priors by
    default, which makes performance much more robust to dimensionality. See
    discussion #2451 for details. The only models that are not changed are the
    fully Bayesian models and PairwiseGP; for models that utilize a
    composite kernel, such as multi-fidelity/task/context, this change only
    affects the base kernel (#2449, #2450, #2507).
  • Use Standardize by default in all the models using the upgraded priors. In
    addition to reducing the amount of boilerplate needed to initialize a model,
    this change was motivated by the change to default priors, because the new
    priors will work less well when data is not standardized. Users who do not
    want to use transforms should explicitly pass in None (#2458, #2532).
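Standardization rescales the training targets to zero mean and unit sample variance, which is the scale the upgraded priors expect. A minimal plain-Python sketch of the idea (not BoTorch's Standardize transform, which is tensor-based and invertible):

```python
import math

def standardize(y):
    """Map values to zero mean and unit sample variance (ddof=1)."""
    n = len(y)
    mean = sum(y) / n
    var = sum((v - mean) ** 2 for v in y) / (n - 1)  # sample variance
    std = math.sqrt(var)
    return [(v - mean) / std for v in y]

print(standardize([2.0, 4.0, 6.0]))  # [-1.0, 0.0, 1.0]
```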

Compatibility

  • Unpin NumPy (#2459).
  • Require PyTorch>=2.0.1, GPyTorch==1.13, and linear_operator==0.5.3 (#2511).

New features

  • Introduce PathwiseThompsonSampling acquisition function (#2443).
  • Enable qBayesianActiveLearningByDisagreement to accept a posterior
    transform, and improve its implementation (#2457).
  • Enable SaasPyroModel to sample via NUTS when training data is empty (#2465).
  • Add multi-objective qBayesianActiveLearningByDisagreement (#2475).
  • Add input constructor for qNegIntegratedPosteriorVariance (#2477).
  • Introduce qLowerConfidenceBound (#2517).
  • Add input constructor for qMultiFidelityHypervolumeKnowledgeGradient (#2524).
  • Add posterior_transform to ApproximateGPyTorchModel.posterior (#2531).

Bug fixes

  • Fix batch_shape default in OrthogonalAdditiveKernel (#2473).
  • Ensure all tensors are on CPU in HitAndRunPolytopeSampler (#2502).
  • Fix duplicate logging in generation/gen.py (#2504).
  • Raise exception if X_pending is set on the underlying AcquisitionFunction
    in prior-guided AcquisitionFunction (#2505).
  • Make affine input transforms error with data of incorrect dimension, even in
    eval mode (#2510).
  • Use fidelity-aware current_value in input constructor for qMultiFidelityKnowledgeGradient (#2519).
  • Apply input transforms when computing MLL in model closures (#2527).
  • Detach fval in torch_minimize to remove an opportunity for memory leaks
    (#2529).

Documentation

  • Clarify incompatibility of inter-point constraints with get_polytope_samples
    (#2469).
  • Update tutorials to use the log variants of EI-family acquisition functions,
    don't make tutorials pass Standardize unnecessarily, and other
    simplifications and cleanup (#2462, #2463, #2490, #2495, #2496, #2498, #2499).
  • Remove deprecated FixedNoiseGP (#2536).

Other changes

  • More informative warnings about failure to standardize or normalize data
    (#2489).
  • Suppress irrelevant warnings in qHypervolumeKnowledgeGradient helpers
    (#2486).
  • Cleaner botorch/acquisition/multi_objective directory structure (#2485).
  • With AffineInputTransform, always require data to have at least two
    dimensions (#2518).
  • Remove deprecated argument data_fidelity to SingleTaskMultiFidelityGP and
    deprecated model FixedNoiseMultiFidelityGP (#2532).
  • Raise an OptimizationGradientError when optimization produces NaN gradients (#2537).
  • Improve numerics by replacing torch.log(1 + x) with torch.log1p(x)
    and torch.exp(x) - 1 with torch.special.expm1 (#2539, #2540, #2541).
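The log1p/expm1 change is a standard floating-point precaution: for x near zero, computing 1 + x rounds away most of x's significant digits before the log is taken, and exp(x) - 1 suffers the analogous cancellation. A small demonstration with the math-module equivalents of the torch functions named above:

```python
import math

x = 1e-12

# log(1 + x): the sum 1 + 1e-12 is rounded first, losing precision.
naive = math.log(1 + x)
# log1p(x) evaluates log(1 + x) without forming 1 + x explicitly.
stable = math.log1p(x)
print(naive, stable)

# Same story for exp(x) - 1 versus expm1(x).
naive_e = math.exp(x) - 1
stable_e = math.expm1(x)
print(naive_e, stable_e)
```

For x = 1e-12 the naive forms are off by roughly 1e-16 in absolute terms (a relative error near 1e-4), while the stable forms are accurate to full double precision.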

Maintenance Release, I-BNN Kernel

22 Jul 20:53

Compatibility

  • Pin NumPy to <2.0 (#2382).
  • Require GPyTorch 1.12 and LinearOperator 0.5.2 (#2408, #2441).

New features

  • Support evaluating posterior predictive in MultiTaskGP (#2375).
  • Infinite-width BNN kernel (#2366) and the corresponding tutorial (#2381).
  • An improved elliptical slice sampling implementation (#2426).
  • Add a helper for producing a DeterministicModel using a Matheron path (#2435).

Deprecations and Deletions

  • Stop allowing some arguments to be ignored in acqf input constructors (#2356).
  • Reap deprecated **kwargs argument from optimize_acqf variants (#2390).
  • Delete DeterministicPosterior and DeterministicSampler (#2391, #2409, #2410).
  • Remove deprecated CachedCholeskyMCAcquisitionFunction (#2399).
  • Deprecate model conversion code (#2431).
  • Deprecate gp_sampling module in favor of pathwise sampling (#2432).

Bug Fixes

  • Fix observation noise shape for batched models (#2377).
  • Fix sample_all_priors to not sample one value for all lengthscales (#2404).
  • Make (Log)NoisyExpectedImprovement create a correct fantasy model with
    non-default SingleTaskGP (#2414).

Other Changes


Maintenance Release

11 Jun 23:45

New Features

  • Implement qLogNParEGO (#2364).
  • Support picking the best of multiple fit attempts in fit_gpytorch_mll (#2373).

Deprecations

  • Many functions that used to silently ignore arbitrary keyword arguments will now
    raise an exception when passed unsupported arguments (#2327,#2336).
  • Remove UnstandardizeMCMultiOutputObjective and UnstandardizePosteriorTransform (#2362).

Bug Fixes

  • Remove correlation between the step size and the step direction in sample_polytope (#2290).
  • Fix pathwise sampler bug (#2337).
  • Explicitly check timeout against None so that 0.0 isn't ignored (#2348).
  • Fix boundary handling in sample_polytope (#2353).
  • Avoid division by zero in normalize & unnormalize when lower & upper bounds are equal (#2363).
  • Update sample_all_priors to support a wider set of priors (#2371).
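The normalize/unnormalize fix (#2363) concerns degenerate bounds. One way to guard a min-max normalization against lower == upper, sketched in scalar Python (BoTorch's versions are tensor-valued, and the fallback value here is only illustrative):

```python
def normalize(x, lower, upper):
    """Min-max normalize x to [0, 1]; degenerate bounds map to 0 to avoid 0/0."""
    span = upper - lower
    if span == 0:
        return 0.0  # lower == upper: every value maps to a constant, not NaN
    return (x - lower) / span

print(normalize(5.0, 0.0, 10.0))  # 0.5
print(normalize(3.0, 3.0, 3.0))   # 0.0 rather than NaN
```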

Other Changes

  • Clarify is_non_dominated behavior with NaN (#2332).
  • Add input constructor for qEUBO (#2335).
  • Add LogEI as a baseline in the TuRBO tutorial (#2355).
  • Update polytope sampling code and add thinning capability (#2358).
  • Add initial objective values to initial state for sample efficiency (#2365).
  • Clarify behavior on standard deviations with <1 degree of freedom (#2357).
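The is_non_dominated utility mentioned above flags Pareto-optimal points. Its basic semantics for maximization (setting aside the NaN handling that #2332 clarifies) can be sketched in plain Python:

```python
def is_non_dominated(points):
    """Flag points not dominated by any other point (maximization)."""
    def dominates(a, b):
        # a dominates b if a is >= b everywhere and > b somewhere.
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    return [not any(dominates(q, p) for q in points if q is not p) for p in points]

# (0.5, 0.5) is dominated by both other points:
print(is_non_dominated([(1.0, 2.0), (2.0, 1.0), (0.5, 0.5)]))  # [True, True, False]
```

BoTorch's implementation is vectorized over tensors; this quadratic loop only illustrates the definition.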

Maintenance Release, SCoreBO

01 May 20:17

Compatibility

  • Require Python >= 3.10 (#2293).

New Features

  • SCoreBO and Bayesian Active Learning acquisition functions (#2163).

Bug Fixes

  • Fix non-None constraint noise levels in some constrained test problems (#2241).
  • Fix inverse cost-weighted utility behaviour for non-positive acquisition values (#2297).

Other Changes

  • Don't allow unused keyword arguments in Model.construct_inputs (#2186).
  • Re-map task values in MTGP if they are not contiguous integers starting from zero (#2230).
  • Unify ModelList and ModelListGP subset_output behavior (#2231).
  • Ensure mean and interior_point of LinearEllipticalSliceSampler have correct shapes (#2245).
  • Speed up task covariance of LCEMGP (#2260).
  • Improvements to batch_cross_validation, support for model init kwargs (#2269).
  • Support custom all_tasks for MTGPs (#2271).
  • Error out if scipy optimizer does not support bounds / constraints (#2282).
  • Support diagonal covariance root with fixed indices for LinearEllipticalSliceSampler (#2283).
  • Make qNIPV a subclass of AcquisitionFunction rather than AnalyticAcquisitionFunction (#2286).
  • Increase code-sharing of LCEMGP & define construct_inputs (#2291).

Deprecations

  • Remove deprecated args from base MCSampler (#2228).
  • Remove deprecated botorch/generation/gen/minimize (#2229).
  • Remove fit_gpytorch_model (#2250).
  • Remove requires_grad_ctx (#2252).
  • Remove base_samples argument of GPyTorchPosterior.rsample (#2254).
  • Remove deprecated mvn argument to GPyTorchPosterior (#2255).
  • Remove deprecated Posterior.event_shape (#2320).
  • Remove **kwargs & deprecated indices argument of Round transform (#2321).
  • Remove Standardize.load_state_dict (#2322).
  • Remove FixedNoiseMultiTaskGP (#2323).

Maintenance Release, Updated Community Contributions

27 Feb 05:58

New Features

  • Introduce updated guidelines and a new directory for community contributions (#2167).
  • Add qEUBO preferential acquisition function (#2192).
  • Add Multi Information Source Augmented GP (#2152).

Bug Fixes

  • Fix condition_on_observations in fully Bayesian models (#2151).
  • Fix a bug that occurs when splitting single-element bins; use default BoTorch kernel for BAxUS (#2165).
  • Fix a bug when non-linear constraints are used with q > 1 (#2168).
  • Remove unsupported X_pending from qMultiFidelityLowerBoundMaxValueEntropy constructor (#2193).
  • Don't allow data_fidelities=[] in SingleTaskMultiFidelityGP (#2195).
  • Fix EHVI, qEHVI, and qLogEHVI input constructors (#2196).
  • Fix input constructor for qMultiFidelityMaxValueEntropy (#2198).
  • Add ability to not deduplicate points in _is_non_dominated_loop (#2203).

Other Changes

  • Minor improvements to MVaR risk measure (#2150).
  • Add support for multitask models to ModelListGP (#2154).
  • Support unspecified noise in ContextualDataset (#2155).
  • Update HVKG sampler to reflect the number of model outputs (#2160).
  • Remove the restriction in OneHotToNumeric that the categoricals are the trailing dimensions (#2166).
  • Standardize broadcasting logic of q(Log)EI's best_f and compute_best_feasible_objective (#2171).
  • Use regular inheritance instead of dispatcher to special-case PairwiseGP logic (#2176).
  • Support PBO in EUBO's input constructor (#2178).
  • Add posterior_transform to qMaxValueEntropySearch's input constructor (#2181).
  • Do not normalize or standardize a dimension if all values are equal (#2185).
  • Reap deprecated support for objectives with 1 arg in GenericMCObjective (#2199).
  • Consistent signature for get_objective_weights_transform (#2200).
  • Update context order handling in ContextualDataset (#2205).
  • Update contextual models for use in MBM (#2206).
  • Remove (Identity)AnalyticMultiOutputObjective (#2208).
  • Reap deprecated support for soft_eval_constraint (#2223); please use botorch.utils.sigmoid instead.

Compatibility

  • Pin mpmath <= 1.3.0 to avoid CI breakages due to removed modules in the latest alpha release (#2222).

Hypervolume Knowledge Gradient (HVKG)

09 Dec 01:58

New features

Hypervolume Knowledge Gradient (HVKG):

  • Add qHypervolumeKnowledgeGradient, which seeks to maximize the difference in hypervolume of the hypervolume-maximizing set of a fixed size after conditioning on the unknown observation(s) that would be received if X were evaluated (#1950, #1982, #2101).
  • Add tutorial on decoupled Multi-Objective Bayesian Optimization (MOBO) with HVKG (#2094).
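For intuition about the quantity HVKG maximizes: the hypervolume of a two-objective Pareto front relative to a reference point can be computed with a simple sweep, as in this self-contained sketch (BoTorch instead uses box-decomposition-based implementations that scale to more objectives):

```python
def hypervolume_2d(front, ref):
    """Hypervolume dominated by `front` (maximization) w.r.t. `ref`.

    `front` is a list of mutually non-dominated (f1, f2) points,
    each componentwise above `ref`.
    """
    # Sort by the first objective descending; for a non-dominated front,
    # the second objective then increases, so the area splits into strips.
    pts = sorted(front, key=lambda p: p[0], reverse=True)
    hv = 0.0
    prev_f2 = ref[1]
    for f1, f2 in pts:
        hv += (f1 - ref[0]) * (f2 - prev_f2)
        prev_f2 = f2
    return hv

# Two non-dominated points against reference (0, 0):
print(hypervolume_2d([(3.0, 1.0), (1.0, 3.0)], (0.0, 0.0)))  # 5.0
```

Here the union of the two dominated rectangles has area 3 + 3 minus the unit overlap, i.e. 5.0.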

Other new features:

  • Add MultiOutputFixedCostModel, which is useful for decoupled scenarios where the objectives have different costs (#2093).
  • Enable q > 1 in acquisition function optimization when nonlinear constraints are present (#1793).
  • Support different noise levels for different outputs in test functions (#2136).

Bug fixes

  • Fix fantasization with a FixedNoiseGaussianLikelihood when noise is known and X is empty (#2090).
  • Make LearnedObjective compatible with constraints in acquisition functions regardless of sample_shape (#2111).
  • Make input constructors for qExpectedImprovement, qLogExpectedImprovement, and qProbabilityOfImprovement compatible with LearnedObjective regardless of sample_shape (#2115).
  • Fix handling of constraints in qSimpleRegret (#2141).

Other changes

  • Increase default sample size for LearnedObjective (#2095).
  • Allow passing in X with or without fidelity dimensions in project_to_target_fidelity (#2102).
  • Use full-rank task covariance matrix by default in SAAS MTGP (#2104).
  • Rename FullyBayesianPosterior to GaussianMixturePosterior; add _is_ensemble and _is_fully_bayesian attributes to Model (#2108).
  • Various improvements to tutorials, including speedups, improved explanations, and compatibility with newer versions of libraries.

Bugfix release

06 Nov 23:26

Compatibility

  • Re-establish compatibility with PyTorch 1.13.1 (#2083).

Multi-Objective "Log" acquisition functions

03 Nov 00:31

Highlights

  • Additional "Log" acquisition functions for multi-objective optimization with better numerical behavior, which often leads to significantly improved BO performance over their non-"Log" counterparts.
  • FixedNoiseGP and FixedNoiseMultiFidelityGP have been deprecated; their functionalities are merged into SingleTaskGP and SingleTaskMultiFidelityGP, respectively (#2052, #2053).
  • Removed deprecated legacy model fitting functions: numpy_converter, fit_gpytorch_scipy, fit_gpytorch_torch, _get_extra_mll_args (#1995, #2050).

New Features

  • Support multiple data fidelity dimensions in SingleTaskMultiFidelityGP and (deprecated) FixedNoiseMultiFidelityGP models (#1956).
  • Add logsumexp and fatmax to handle infinities and control asymptotic behavior in "Log" acquisition functions (#1999).
  • Add outcome and feature names to datasets; implement MultiTaskDataset (#2015, #2019).
  • Add constrained Hartmann and constrained Gramacy synthetic test problems (#2022, #2026, #2027).
  • Support observed noise in MixedSingleTaskGP (#2054).
  • Add PosteriorStandardDeviation acquisition function (#2060).
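The logsumexp helper above exists because a naive log(sum(exp(...))) overflows for large inputs and produces NaN when entries are -inf. The standard max-shift trick it builds on, in plain Python (BoTorch's tensor version additionally offers fatmax, a fat-tailed smooth maximum):

```python
import math

def logsumexp(values):
    """Compute log(sum(exp(v) for v in values)) stably, tolerating -inf."""
    m = max(values)
    if m == -math.inf:
        return -math.inf  # all entries are -inf; avoid (-inf) - (-inf) = nan
    # Shifting by the max keeps every exponent <= 0, so nothing overflows.
    return m + math.log(sum(math.exp(v - m) for v in values))

print(logsumexp([1000.0, 1000.0]))   # ~1000.693; naive exp(1000) would overflow
print(logsumexp([-math.inf, 0.0]))   # 0.0; the -inf entry contributes nothing
```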

Bug fixes

  • Fix input constructors for qMaxValueEntropy and qMultiFidelityKnowledgeGradient (#1989).
  • Fix precision issue that arises from inconsistent data types in LearnedObjective (#2006).
  • Fix fantasization with FixedNoiseGP and outcome transforms, and use FantasizeMixin (#2011).
  • Fix LearnedObjective base sample shape (#2021).
  • Apply constraints in prune_inferior_points (#2069).
  • Support non-batch evaluation of PenalizedMCObjective (#2073).
  • Fix Dataset equality checks (#2077).

Other changes

  • Don't allow unused **kwargs in input_constructors except for a defined set of exceptions (#1872, #1985).
  • Merge inferred and fixed noise LCE-M models (#1993).
  • Fix import structure in botorch.acquisition.utils (#1986).
  • Remove deprecated functionality: weights argument of RiskMeasureMCObjective and squeeze_last_dim (#1994).
  • Make X, Y, Yvar into properties in datasets (#2004).
  • Make synthetic constrained test functions subclass from SyntheticTestFunction (#2029).
  • Add construct_inputs to contextual GP models LCEAGP and SACGP (#2057).