GPR - example 1

library(GPFDA)
require(MASS)

Simulating data from a GP with unidimensional input

We simulate \(30\) independent realisations from a zero-mean GP with a covariance function given by the sum of a linear kernel and a squared exponential kernel. Each observed curve has a sample size of \(15\) time points on \([0,1]\).

set.seed(123)
nrep <- 30
n <- 15
input <- seq(0, 1, length.out = n)
hp <- list('linear.a' = log(40), 'linear.i' = log(10),
           'pow.ex.v' = log(5), 'pow.ex.w' = log(15),
           'vv' = log(0.3))
Sigma <- cov.linear(hyper = hp, input = input) +
  cov.pow.ex(hyper = hp, input = input, gamma = 2) +
  diag(exp(hp$vv), n, n)
Y <- t(mvrnorm(n = nrep, mu = rep(0, n), Sigma = Sigma))
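Before fitting, it can be helpful to look at the simulated curves. The following sketch is not part of the original example; it simply uses base R's matplot, with each column of Y being one realisation observed at the time points in input:

# Optional visual check of the simulated data:
# each column of Y is one realisation observed at the time points in 'input'
matplot(input, Y, type = 'l', lty = 1,
        xlab = 't', ylab = 'y(t)', main = 'Simulated realisations')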

Estimation

Estimation of the GPR model can be carried out without using the gradient:

set.seed(111)
fitNoGrad <- gpr(input = input, response = Y, Cov = c('linear', 'pow.ex'),
                 gamma = 2, trace = 4, nInitCandidates = 1, useGradient = F)
#>
#>  --------- Initialising ----------
#> iter:  -loglik:     linear.a     linear.i     pow.ex.v     pow.ex.w     vv
#>   0:     3711.8575:  1.71278  4.17194 -2.38691 0.274907 -2.25353
#>   4:     942.14351:  3.30894  3.66638 -2.96801 -0.666384 0.384511
#>   8:     917.64021:  3.81648  2.23065 -2.89457 -0.528499 0.687273
#>  12:     861.24361:  4.22450  2.13730 -0.248290  3.77275 0.546493
#>  16:     722.70758:  3.97053  2.14354  1.23041  2.88292 -1.11003
#>  20:     721.70432:  3.86213  2.11517  1.33091  2.82972 -1.20630
#>  24:     721.58809:  3.77228  2.07755  1.32091  2.81484 -1.20764
#>  28:     721.55626:  3.71429  2.04747  1.33447  2.81931 -1.20972
#>
#>      optimization finished.
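The fitted object stores the estimated log-hyperparameters in its hyper component (the same structure is inspected for the gradient-based fit further below). If desired, the estimates from this gradient-free fit can be put back on the original scale in the same way:

# Back-transform the log-hyperparameters of the gradient-free fit
sapply(fitNoGrad$hyper, exp)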

If one wants to use the analytical gradient:

set.seed(111)
fit <- gpr(input = input, response = Y, Cov = c('linear', 'pow.ex'),
           gamma = 2, trace = 4, nInitCandidates = 1, useGradient = T)
#>
#>  --------- Initialising ----------
#> iter:  -loglik:     linear.a     linear.i     pow.ex.v     pow.ex.w     vv
#>   0:     3711.8575:  1.71278  4.17194 -2.38691 0.274907 -2.25353
#>   4:     934.72164:  3.33584  3.79766 -2.82219 -0.407991 0.530585
#>   8:     917.51640:  3.79340  2.07776 -2.64103 -0.0756082 0.692108
#>  12:     810.17635:  4.56448  2.49453  1.59043  1.79717 0.0581705
#>  16:     722.91927:  4.03888  2.33859  1.48098  2.76962 -1.20929
#>  20:     721.55718:  3.71214  2.05899  1.33235  2.82090 -1.20784
#>
#>      optimization finished.

Note the smaller number of iterations needed when the analytical expressions for the gradient are used in the optimisation.

We can see that the hyperparameter estimates are very accurate despite the fairly small sample size:

sapply(fit$hyper, exp)
#> linear.a linear.i pow.ex.v pow.ex.w       vv
#>  41.0373   7.7446   3.7981  16.7650   0.2983
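Since the true log-hyperparameters used in the simulation are stored in hp, they can be placed side by side with the estimates (a small convenience sketch, not in the original example):

# True simulation values vs. estimates, both back-transformed from the log scale
rbind(true      = sapply(hp, exp),
      estimated = sapply(fit$hyper, exp)[names(hp)])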

The fitted model for the \(10\)th realisation can be plotted as follows:

plot(fit, realisation = 10)

#> Predicted values not found, ploting fitted values

Prediction

Predictions on a fine grid for the \(10\)th realisation can be obtained as follows:

inputNew <- seq(0, 1, length.out = 1000)
pred1 <- gprPredict(train = fit, inputNew = inputNew, noiseFreePred = T)
plot(pred1, realisation = 10)

If one wants to include the noise variance in the predictions for the new time points:

pred2 <- gprPredict(train = fit, inputNew = inputNew, noiseFreePred = F)
plot(pred2, realisation = 10)
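Roughly speaking, the two settings differ only in the estimated noise variance: with noiseFreePred = F the prediction intervals additionally reflect the observation noise. The estimated noise variance can be read off the fitted object (same back-transformation as above):

# Estimated noise variance on the original scale (exp of the 'vv' log-hyperparameter);
# this is the extra variability included when noiseFreePred = F
exp(fit$hyper[['vv']])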

