Sum#

class sklearn.gaussian_process.kernels.Sum(k1, k2)[source]#

The Sum kernel takes two kernels \(k_1\) and \(k_2\) and combines them via

\[k_{sum}(X, Y) = k_1(X, Y) + k_2(X, Y)\]

Note that the __add__ magic method is overridden, so Sum(RBF(), RBF()) is equivalent to using the + operator with RBF() + RBF().
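A quick sketch of this equivalence (the toy input X is made up for illustration):

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF, Sum

# Both constructions produce the same composite kernel.
k_explicit = Sum(RBF(), RBF())
k_operator = RBF() + RBF()

X = np.array([[0.0], [1.0], [2.0]])

# Identical hyperparameters and identical kernel matrices.
assert np.allclose(k_explicit.theta, k_operator.theta)
assert np.allclose(k_explicit(X), k_operator(X))
```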

Read more in the User Guide.

Added in version 0.18.

Parameters:
k1 : Kernel

The first base-kernel of the sum-kernel.

k2 : Kernel

The second base-kernel of the sum-kernel.

Examples

>>> from sklearn.datasets import make_friedman2
>>> from sklearn.gaussian_process import GaussianProcessRegressor
>>> from sklearn.gaussian_process.kernels import RBF, Sum, ConstantKernel
>>> X, y = make_friedman2(n_samples=500, noise=0, random_state=0)
>>> kernel = Sum(ConstantKernel(2), RBF())
>>> gpr = GaussianProcessRegressor(kernel=kernel,
...                                random_state=0).fit(X, y)
>>> gpr.score(X, y)
1.0
>>> kernel
1.41**2 + RBF(length_scale=1)
__call__(X, Y=None, eval_gradient=False)[source]#

Return the kernel k(X, Y) and optionally its gradient.

Parameters:
X : array-like of shape (n_samples_X, n_features) or list of object

Left argument of the returned kernel k(X, Y)

Y : array-like of shape (n_samples_Y, n_features) or list of object, default=None

Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.

eval_gradient : bool, default=False

Determines whether the gradient with respect to the log of the kernel hyperparameter is computed.

Returns:
K : ndarray of shape (n_samples_X, n_samples_Y)

Kernel k(X, Y)

K_gradient : ndarray of shape (n_samples_X, n_samples_X, n_dims), optional

The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval_gradient is True.
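As a minimal sketch of calling a Sum kernel (the input X is a made-up toy array):

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF, Sum, ConstantKernel

kernel = Sum(ConstantKernel(2.0), RBF(length_scale=1.0))
X = np.array([[0.0], [1.0], [2.0]])  # three 1-D samples

K = kernel(X)  # k(X, X), shape (3, 3)
K2, K_grad = kernel(X, eval_gradient=True)
# This sum kernel has two hyperparameters (constant_value, length_scale),
# so the gradient has shape (3, 3, 2).
print(K.shape, K_grad.shape)  # (3, 3) (3, 3, 2)
```

On the diagonal the value is constant_value + 1 = 3, since an RBF kernel evaluates to 1 at zero distance.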

property bounds#

Returns the log-transformed bounds on theta.

Returns:
bounds : ndarray of shape (n_dims, 2)

The log-transformed bounds on the kernel’s hyperparameters theta
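For a sum of ConstantKernel and RBF (chosen here for illustration), bounds stacks each component's log-transformed bounds, one row per non-fixed hyperparameter:

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF, Sum, ConstantKernel

kernel = Sum(ConstantKernel(2.0), RBF(length_scale=1.0))
# Both hyperparameters use the default bounds (1e-5, 1e5);
# `bounds` returns their logs.
print(kernel.bounds.shape)       # (2, 2)
print(np.exp(kernel.bounds[0]))  # back-transformed to the original scale
```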

clone_with_theta(theta)[source]#

Returns a clone of self with given hyperparameters theta.

Parameters:
theta : ndarray of shape (n_dims,)

The hyperparameters
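A short sketch: since theta is log-transformed, the new values are passed as logs, and the original kernel is left untouched:

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF, Sum, ConstantKernel

kernel = Sum(ConstantKernel(2.0), RBF(length_scale=1.0))
# Pass log-values: constant_value -> 4.0, length_scale -> 0.5.
new = kernel.clone_with_theta(np.log([4.0, 0.5]))
print(new)     # 2**2 + RBF(length_scale=0.5)
print(kernel)  # original is unchanged: 1.41**2 + RBF(length_scale=1)
```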

diag(X)[source]#

Returns the diagonal of the kernel k(X, X).

The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated.

Parameters:
X : array-like of shape (n_samples_X, n_features) or list of object

Argument to the kernel.

Returns:
K_diag : ndarray of shape (n_samples_X,)

Diagonal of kernel k(X, X)
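A minimal sketch of the equivalence, on a made-up toy input:

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF, Sum, ConstantKernel

kernel = Sum(ConstantKernel(2.0), RBF(length_scale=1.0))
X = np.array([[0.0], [1.0], [2.0]])

d = kernel.diag(X)
# Matches the diagonal of the full kernel matrix without computing it:
assert np.allclose(d, np.diag(kernel(X)))
print(d)  # [3. 3. 3.]  (constant 2 + RBF value 1 at zero distance)
```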

get_params(deep=True)[source]#

Get parameters of this kernel.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : dict

Parameter names mapped to their values.
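A quick sketch: for a Sum kernel, deep=True exposes the component kernels (k1, k2) plus their nested parameters under the <component>__<parameter> naming scheme:

```python
from sklearn.gaussian_process.kernels import RBF, Sum, ConstantKernel

kernel = Sum(ConstantKernel(2.0), RBF(length_scale=1.0))
params = kernel.get_params()  # deep=True is the default
print(sorted(params))
# ['k1', 'k1__constant_value', 'k1__constant_value_bounds',
#  'k2', 'k2__length_scale', 'k2__length_scale_bounds']
```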

property hyperparameters#

Returns a list of all hyperparameter specifications.

is_stationary()[source]#

Returns whether the kernel is stationary.

property n_dims#

Returns the number of non-fixed hyperparameters of the kernel.

property requires_vector_input#

Returns whether the kernel is defined on fixed-length feature vectors or generic objects. Defaults to True for backward compatibility.

set_params(**params)[source]#

Set the parameters of this kernel.

The method works on simple kernels as well as on nested kernels. The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns:
self
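A brief sketch of the nested syntax: in a Sum kernel, k1 and k2 name the components, so their parameters are reached via k1__… and k2__…:

```python
from sklearn.gaussian_process.kernels import RBF, Sum, ConstantKernel

kernel = Sum(ConstantKernel(2.0), RBF(length_scale=1.0))
# k2 is the RBF component, so k2__length_scale targets its length scale.
kernel.set_params(k2__length_scale=10.0)
print(kernel.k2)  # RBF(length_scale=10)
```
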
property theta#

Returns the (flattened, log-transformed) non-fixed hyperparameters.

Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale.

Returns:
theta : ndarray of shape (n_dims,)

The non-fixed, log-transformed hyperparameters of the kernel
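As a sketch, for a sum kernel theta concatenates the log of each component's non-fixed hyperparameters, and assigning to it updates the components in place:

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF, Sum, ConstantKernel

kernel = Sum(ConstantKernel(2.0), RBF(length_scale=1.0))
print(kernel.theta)          # [log(2.0), log(1.0)]
print(np.exp(kernel.theta))  # [2. 1.]  back on the original scale

# Assigning to theta updates the underlying hyperparameters.
kernel.theta = np.log([4.0, 0.5])
print(kernel.k2.length_scale)  # ~0.5
```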