Kriging

Method of interpolation
Example of one-dimensional data interpolation by kriging, with credible intervals. Squares indicate the location of the data. The kriging interpolation, shown in red, runs along the means of the normally distributed credible intervals shown in gray. The dashed curve shows a spline that is smooth, but departs significantly from the expected values given by those means.

In statistics, originally in geostatistics, kriging or Kriging (/ˈkriːɡɪŋ/), also known as Gaussian process regression, is a method of interpolation based on a Gaussian process governed by prior covariances. Under suitable assumptions of the prior, kriging gives the best linear unbiased prediction (BLUP) at unsampled locations.[1] Interpolating methods based on other criteria such as smoothness (e.g., smoothing spline) may not yield the BLUP. The method is widely used in the domain of spatial analysis and computer experiments. The technique is also known as Wiener–Kolmogorov prediction, after Norbert Wiener and Andrey Kolmogorov.

The theoretical basis for the method was developed by the French mathematician Georges Matheron in 1960, based on the master's thesis of Danie G. Krige, the pioneering plotter of distance-weighted average gold grades at the Witwatersrand reef complex in South Africa. Krige sought to estimate the most likely distribution of gold based on samples from a few boreholes. The English verb is to krige, and the most common noun is kriging. The word is sometimes capitalized as Kriging in the literature.

Though computationally intensive in its basic formulation, kriging can be scaled to larger problems using various approximation methods.

Main principles


Related terms and techniques


Kriging predicts the value of a function at a given point by computing a weighted average of the known values of the function in the neighborhood of the point. The method is closely related to regression analysis. Both theories derive a best linear unbiased estimator based on assumptions on covariances, make use of the Gauss–Markov theorem to prove independence of the estimate and error, and use very similar formulae. Even so, they are useful in different frameworks: kriging is made for estimation of a single realization of a random field, while regression models are based on multiple observations of a multivariate data set.

The kriging estimation may also be seen as a spline in a reproducing kernel Hilbert space, with the reproducing kernel given by the covariance function.[2] The difference with the classical kriging approach is provided by the interpretation: while the spline is motivated by a minimum-norm interpolation based on a Hilbert-space structure, kriging is motivated by an expected squared prediction error based on a stochastic model.

Kriging with polynomial trend surfaces is mathematically identical to generalized least squares polynomial curve fitting.

Kriging can also be understood as a form of Bayesian optimization.[3] Kriging starts with a prior distribution over functions. This prior takes the form of a Gaussian process: $N$ samples from a function will be normally distributed, where the covariance between any two samples is the covariance function (or kernel) of the Gaussian process evaluated at the spatial locations of the two points. A set of values is then observed, each value associated with a spatial location. Now, a new value can be predicted at any new spatial location by combining the Gaussian prior with a Gaussian likelihood function for each of the observed values. The resulting posterior distribution is also Gaussian, with a mean and covariance that can be simply computed from the observed values, their variance, and the kernel matrix derived from the prior.
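To make this Bayesian description concrete, the following sketch (in Python with NumPy, assuming a squared-exponential kernel and a hypothetical noise variance `noise_var`, neither of which is prescribed by the text above) computes the Gaussian posterior mean and covariance at new locations from the observed values and the kernel matrix derived from the prior.

```python
import numpy as np

def sq_exp_kernel(a, b, length_scale=1.0, signal_var=1.0):
    """Squared-exponential (RBF) covariance between 1-D location arrays a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return signal_var * np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(x_obs, y_obs, x_new, noise_var=1e-6):
    """Posterior mean and covariance of a zero-mean GP given (possibly noisy) observations."""
    K = sq_exp_kernel(x_obs, x_obs) + noise_var * np.eye(len(x_obs))  # prior + observation variance
    K_s = sq_exp_kernel(x_obs, x_new)                                 # cross-covariance observed/new
    K_ss = sq_exp_kernel(x_new, x_new)                                # prior covariance at new points
    K_inv_y = np.linalg.solve(K, y_obs)
    mean = K_s.T @ K_inv_y                                            # posterior (kriging) mean
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)                      # posterior covariance
    return mean, cov

# Usage: predict at two new locations from three observed values.
x_obs = np.array([0.0, 1.0, 2.5])
y_obs = np.array([0.3, -0.2, 0.8])
mean, cov = gp_posterior(x_obs, y_obs, np.array([0.5, 2.0]))
print(mean, np.sqrt(np.diag(cov)))  # prediction and pointwise posterior standard deviation
```

The posterior mean plays the role of the kriging prediction, and the diagonal of the posterior covariance gives the credible intervals of the kind shown in the figure above.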

Geostatistical estimator


In geostatistical models, sampled data are interpreted as the result of a random process. The fact that these models incorporate uncertainty in their conceptualization doesn't mean that the phenomenon – the forest, the aquifer, the mineral deposit – has resulted from a random process, but rather it allows one to build a methodological basis for the spatial inference of quantities in unobserved locations and to quantify the uncertainty associated with the estimator.

A stochastic process is, in the context of this model, simply a way to approach the set of data collected from the samples. The first step in geostatistical modelling is to create a random process that best describes the set of observed data.

A value from location $x_1$ (generic denomination of a set of geographic coordinates) is interpreted as a realization $z(x_1)$ of the random variable $Z(x_1)$. In the space $A$, where the set of samples is dispersed, there are $N$ realizations of the random variables $Z(x_1), Z(x_2), \ldots, Z(x_N)$, correlated between themselves.

The set of random variables constitutes a random function, of which only one realization is known – the set $z(x_i)$ of observed data. With only one realization of each random variable, it's theoretically impossible to determine any statistical parameter of the individual variables or the function. The proposed solution in the geostatistical formalism consists in assuming various degrees of stationarity in the random function, in order to make the inference of some statistic values possible.

For instance, if one assumes, based on the homogeneity of samples in area $A$ where the variable is distributed, the hypothesis that the first moment is stationary (i.e. all random variables have the same mean), then one is assuming that the mean can be estimated by the arithmetic mean of sampled values.

The hypothesis of stationarity related to the second moment is defined in the following way: the correlation between two random variables solely depends on the spatial distance between them and is independent of their location. Thus if $\mathbf{h} = x_2 - x_1$ and $h = |\mathbf{h}|$, then:

$$C\big(Z(x_1), Z(x_2)\big) = C\big(Z(x_i), Z(x_i + \mathbf{h})\big) = C(h),$$
$$\gamma\big(Z(x_1), Z(x_2)\big) = \gamma\big(Z(x_i), Z(x_i + \mathbf{h})\big) = \gamma(h).$$

For simplicity, we define $C(x_i, x_j) = C\big(Z(x_i), Z(x_j)\big)$ and $\gamma(x_i, x_j) = \gamma\big(Z(x_i), Z(x_j)\big)$.

This hypothesis allows one to infer those two measures – the variogram and the covariogram:

$$\gamma(h) = \frac{1}{2|N(h)|} \sum_{(i,j)\in N(h)} \big(Z(x_i) - Z(x_j)\big)^2,$$
$$C(h) = \frac{1}{|N(h)|} \sum_{(i,j)\in N(h)} \big(Z(x_i) - m(h)\big)\big(Z(x_j) - m(h)\big),$$

where:

$$m(h) = \frac{1}{2|N(h)|} \sum_{(i,j)\in N(h)} \big(Z(x_i) + Z(x_j)\big);$$
$N(h)$ denotes the set of pairs of observations $i, j$ such that $|x_i - x_j| = h$, and $|N(h)|$ is the number of pairs in the set.

In this set, $(i, j)$ and $(j, i)$ denote the same element. Generally an "approximate distance" $h$ is used, implemented using a certain tolerance.
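As an illustration of these estimators, the sketch below (a minimal Python/NumPy version, assuming one-dimensional sample locations and a hypothetical lag tolerance `tol`) gathers the pairs $N(h)$ for each approximate distance $h$ and evaluates the empirical variogram $\gamma(h)$.

```python
import numpy as np

def empirical_variogram(x, z, lags, tol=0.5):
    """Empirical semivariogram: gamma(h) = (1 / (2|N(h)|)) * sum over N(h) of (z_i - z_j)^2,
    where N(h) contains the pairs whose separation is within tol of the lag h."""
    gamma = []
    for h in lags:
        sq_diffs = []
        for i in range(len(x)):
            for j in range(i + 1, len(x)):              # (i, j) and (j, i) counted once
                if abs(abs(x[i] - x[j]) - h) <= tol:    # "approximate distance" with tolerance
                    sq_diffs.append((z[i] - z[j]) ** 2)
        gamma.append(0.5 * np.mean(sq_diffs) if sq_diffs else np.nan)
    return np.array(gamma)

# Usage with a small synthetic sample set.
x = np.array([0.0, 1.1, 2.0, 3.2, 4.1])
z = np.array([1.0, 1.4, 0.9, 0.3, 0.1])
print(empirical_variogram(x, z, lags=[1.0, 2.0, 3.0]))
```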

Linear estimation


Spatial inference, or estimation, of a quantity $Z\colon \mathbb{R}^n \to \mathbb{R}$, at an unobserved location $x_0$, is calculated from a linear combination of the observed values $z_i = Z(x_i)$ and weights $w_i(x_0),\; i = 1, \ldots, N$:

$$\hat{Z}(x_0) = \begin{bmatrix} w_1 & w_2 & \cdots & w_N \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \\ \vdots \\ z_N \end{bmatrix} = \sum_{i=1}^{N} w_i(x_0)\, Z(x_i).$$

The weights $w_i$ are intended to summarize two extremely important procedures in a spatial inference process:

  • reflect the structural "proximity" of samples to the estimation location $x_0$;
  • at the same time, they should have a desegregation effect, in order to avoid bias caused by possible sample clusters.

When calculating the weights $w_i$, there are two objectives in the geostatistical formalism: unbiasedness and minimal variance of estimation.

If the cloud of real values $Z(x_0)$ is plotted against the estimated values $\hat{Z}(x_0)$, the criterion for global unbiasedness, intrinsic stationarity or wide-sense stationarity of the field, implies that the mean of the estimations must be equal to the mean of the real values.

The second criterion says that the mean of the squared deviations $\big(\hat{Z}(x) - Z(x)\big)^2$ must be minimal; the more dispersed the cloud of estimated values is around the cloud of real values, the more imprecise the estimator.
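A linear estimator of this kind is just a weighted sum of the observed values. The toy sketch below uses hypothetical inverse-distance weights (purely for illustration, not kriging weights) and checks the unbiasedness constraint that the weights sum to one.

```python
import numpy as np

def linear_estimate(weights, z_obs):
    """Z_hat(x0) = sum_i w_i(x0) * z_i, with a check of the unbiasedness constraint."""
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0), "weights must sum to 1 for unbiasedness"
    return weights @ np.asarray(z_obs, dtype=float)

# Illustrative inverse-distance weights (not kriging weights), normalised to sum to 1.
x_obs, x0 = np.array([1.0, 2.0, 4.0]), 2.5
w = 1.0 / np.abs(x_obs - x0)
w /= w.sum()
print(linear_estimate(w, [0.8, 1.1, 0.6]))
```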

Methods


Depending on the stochastic properties of the random field and the various degrees of stationarity assumed, different methods for calculating the weights can be deduced, i.e. different types of kriging apply. Classical methods, such as ordinary and simple kriging, are described below.

Ordinary kriging


The unknown value $Z(x_0)$ is interpreted as a random variable located at $x_0$, as are the values of the neighbouring samples $Z(x_i),\ i = 1, \ldots, N$. The estimator $\hat{Z}(x_0)$ is also interpreted as a random variable located at $x_0$, a result of the linear combination of variables.

Kriging seeks to minimize the mean square value of the following error in estimating $Z(x_0)$, subject to lack of bias:

$$\epsilon(x_0) = \hat{Z}(x_0) - Z(x_0) = \begin{bmatrix} W^T & -1 \end{bmatrix} \cdot \begin{bmatrix} Z(x_1) & \cdots & Z(x_N) & Z(x_0) \end{bmatrix}^T = \sum_{i=1}^{N} w_i(x_0) Z(x_i) - Z(x_0).$$

The two quality criteria referred to previously can now be expressed in terms of the mean and variance of the new random variable $\epsilon(x_0)$:

Lack of bias

Since the random function is stationary, $E[Z(x_i)] = E[Z(x_0)] = m$, the weights must sum to 1 in order to ensure that the model is unbiased. This can be seen as follows:

$$E[\epsilon(x_0)] = 0 \Leftrightarrow \sum_{i=1}^{N} w_i(x_0)\, E[Z(x_i)] - E[Z(x_0)] = 0$$
$$\Leftrightarrow m \sum_{i=1}^{N} w_i(x_0) - m = 0 \Leftrightarrow \sum_{i=1}^{N} w_i(x_0) = 1 \Leftrightarrow \mathbf{1}^T \cdot W = 1.$$
Minimum variance

Two estimators can have $E[\epsilon(x_0)] = 0$, but the dispersion around their mean determines the difference between the quality of estimators. To find an estimator with minimum variance, we need to minimize $E[\epsilon(x_0)^2]$.

$$\begin{aligned} \operatorname{Var}(\epsilon(x_0)) &= \operatorname{Var}\left( \begin{bmatrix} W^T & -1 \end{bmatrix} \cdot \begin{bmatrix} Z(x_1) & \cdots & Z(x_N) & Z(x_0) \end{bmatrix}^T \right) \\ &= \begin{bmatrix} W^T & -1 \end{bmatrix} \cdot \operatorname{Var}\left( \begin{bmatrix} Z(x_1) & \cdots & Z(x_N) & Z(x_0) \end{bmatrix}^T \right) \cdot \begin{bmatrix} W \\ -1 \end{bmatrix}. \end{aligned}$$

See covariance matrix for a detailed explanation.

$$\operatorname{Var}(\epsilon(x_0)) = \begin{bmatrix} W^T & -1 \end{bmatrix} \cdot \begin{bmatrix} \operatorname{Var}_{x_i} & \operatorname{Cov}_{x_i x_0} \\ \operatorname{Cov}_{x_i x_0}^T & \operatorname{Var}_{x_0} \end{bmatrix} \cdot \begin{bmatrix} W \\ -1 \end{bmatrix},$$

where the literals $\left\{ \operatorname{Var}_{x_i}, \operatorname{Var}_{x_0}, \operatorname{Cov}_{x_i x_0} \right\}$ stand for

$$\left\{ \operatorname{Var}\left( \begin{bmatrix} Z(x_1) & \cdots & Z(x_N) \end{bmatrix}^T \right),\ \operatorname{Var}\big(Z(x_0)\big),\ \operatorname{Cov}\left( \begin{bmatrix} Z(x_1) & \cdots & Z(x_N) \end{bmatrix}^T, Z(x_0) \right) \right\}.$$

Once the covariance model or variogram, $C(\mathbf{h})$ or $\gamma(\mathbf{h})$, is defined and valid over the whole field of analysis of $Z(x)$, we can write an expression for the estimation variance of any estimator as a function of the covariance between the samples and the covariances between the samples and the point to estimate:

$$\begin{cases} \operatorname{Var}\big(\epsilon(x_0)\big) = W^T \cdot \operatorname{Var}_{x_i} \cdot W - \operatorname{Cov}_{x_i x_0}^T \cdot W - W^T \cdot \operatorname{Cov}_{x_i x_0} + \operatorname{Var}_{x_0}, \\ \operatorname{Var}\big(\epsilon(x_0)\big) = \operatorname{Cov}(0) + \sum_i \sum_j w_i w_j \operatorname{Cov}(x_i, x_j) - 2 \sum_i w_i C(x_i, x_0). \end{cases}$$

Some conclusions can be asserted from this expression. The variance of estimation:

  • can only be quantified, for any linear estimator, once the stationarity of the mean and of the spatial covariances, or variograms, is assumed;
  • grows when the covariance between the samples and the point to estimate decreases. This means that, when the samples are farther away from $x_0$, the estimation becomes worse;
  • grows with the a priori variance $C(0)$ of the variable $Z(x)$; when the variable is less dispersed, the variance is lower at any point of the area $A$;
  • does not depend on the values of the samples, which means that the same spatial configuration (with the same geometrical relations between samples and the point to estimate) always reproduces the same estimation variance in any part of the area $A$; in this way, the variance does not measure the uncertainty of estimation produced by the local variable.
System of equations
$$W = \underset{\mathbf{1}^T \cdot W = 1}{\operatorname{arg\,min}} \left( W^T \cdot \operatorname{Var}_{x_i} \cdot W - \operatorname{Cov}_{x_i x_0}^T \cdot W - W^T \cdot \operatorname{Cov}_{x_i x_0} + \operatorname{Var}_{x_0} \right).$$

Solving this optimization problem (see Lagrange multipliers) results in the kriging system:

$$\begin{bmatrix} \hat{W} \\ \mu \end{bmatrix} = \begin{bmatrix} \operatorname{Var}_{x_i} & \mathbf{1} \\ \mathbf{1}^T & 0 \end{bmatrix}^{-1} \cdot \begin{bmatrix} \operatorname{Cov}_{x_i x_0} \\ 1 \end{bmatrix} = \begin{bmatrix} \gamma(x_1, x_1) & \cdots & \gamma(x_1, x_n) & 1 \\ \vdots & \ddots & \vdots & \vdots \\ \gamma(x_n, x_1) & \cdots & \gamma(x_n, x_n) & 1 \\ 1 & \cdots & 1 & 0 \end{bmatrix}^{-1} \begin{bmatrix} \gamma(x_1, x^*) \\ \vdots \\ \gamma(x_n, x^*) \\ 1 \end{bmatrix}.$$

The additional parameter $\mu$ is a Lagrange multiplier used in the minimization of the kriging error $\sigma_k^2(x)$ to honor the unbiasedness condition.
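A minimal sketch of this system is given below, assuming a hypothetical exponential variogram model `gamma` (the text above does not fix a particular model). It assembles the augmented matrix, solves for the weights and the Lagrange multiplier, and evaluates the ordinary kriging prediction; the variance line uses the standard closed form $\sigma_{\mathrm{OK}}^2 = \sum_i w_i\, \gamma(x_i, x_0) + \mu$, which is not derived in the text above.

```python
import numpy as np

def gamma(h, sill=1.0, rng=3.0):
    """Hypothetical exponential variogram model (a common, but not the only, choice)."""
    return sill * (1.0 - np.exp(-np.abs(h) / rng))

def ordinary_kriging(x_obs, z_obs, x0):
    """Solve the ordinary kriging system for 1-D locations and predict at x0."""
    n = len(x_obs)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = gamma(x_obs[:, None] - x_obs[None, :])  # gamma(x_i, x_j)
    A[:n, n] = 1.0                                      # unbiasedness column
    A[n, :n] = 1.0                                      # unbiasedness row
    b = np.append(gamma(x_obs - x0), 1.0)               # gamma(x_i, x*) and 1
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]                             # weights and Lagrange multiplier
    z_hat = w @ z_obs                                   # ordinary kriging prediction
    sigma2 = w @ gamma(x_obs - x0) + mu                 # ordinary kriging variance
    return z_hat, sigma2, w

x_obs = np.array([0.0, 1.0, 2.5, 4.0])
z_obs = np.array([1.2, 0.8, 0.5, 0.9])
print(ordinary_kriging(x_obs, z_obs, x0=1.8))
```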

Simple kriging

Simple kriging can be seen as the mean and envelope of Brownian random walks passing through the data points.

Simple kriging is mathematically the simplest, but the least general.[9] It assumes the expectation of the random field is known and relies on a covariance function. However, in most applications neither the expectation nor the covariance are known beforehand.

The practical assumptions for the application of simple kriging are wide-sense stationarity of the field, a known expectation, and a known covariance function.

The covariance function is a crucial design choice, since it stipulates the properties of the Gaussian process and thereby the behaviour of the model. The covariance function encodes information about, for instance, smoothness and periodicity, which is reflected in the estimate produced. A very common covariance function is the squared exponential, which heavily favours smooth function estimates.[10] For this reason, it can produce poor estimates in many real-world applications, especially when the true underlying function contains discontinuities and rapid changes.
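For illustration, the sketch below writes down the squared-exponential kernel alongside the rougher exponential kernel (the Matérn kernel with ν = 1/2); the parameter values are hypothetical, and the point is only that the choice of covariance controls how smooth the resulting estimates will be.

```python
import numpy as np

def squared_exponential(xa, xb, length_scale=1.0, signal_var=1.0):
    """Very smooth covariance: infinitely differentiable sample paths."""
    return signal_var * np.exp(-0.5 * (xa - xb) ** 2 / length_scale**2)

def exponential(xa, xb, length_scale=1.0, signal_var=1.0):
    """Rougher covariance (Matern with nu = 1/2): continuous but non-differentiable paths."""
    return signal_var * np.exp(-np.abs(xa - xb) / length_scale)

# The two kernels agree at zero lag but decay differently with distance, which is
# what drives the difference in smoothness of the resulting kriging estimates.
for h in (0.0, 0.5, 1.0, 2.0):
    print(h, squared_exponential(0.0, h), exponential(0.0, h))
```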

System of equations

The kriging weights of simple kriging have no unbiasedness condition and are given by the simple kriging equation system:

$$\begin{pmatrix} w_1 \\ \vdots \\ w_n \end{pmatrix} = \begin{pmatrix} c(x_1, x_1) & \cdots & c(x_1, x_n) \\ \vdots & \ddots & \vdots \\ c(x_n, x_1) & \cdots & c(x_n, x_n) \end{pmatrix}^{-1} \begin{pmatrix} c(x_1, x_0) \\ \vdots \\ c(x_n, x_0) \end{pmatrix}.$$

This is analogous to a linear regression of $Z(x_0)$ on the other $z_1, \ldots, z_n$.

Estimation

The interpolation by simple kriging is given by

$$\hat{Z}(x_0) = \begin{pmatrix} z_1 \\ \vdots \\ z_n \end{pmatrix}' \begin{pmatrix} c(x_1, x_1) & \cdots & c(x_1, x_n) \\ \vdots & \ddots & \vdots \\ c(x_n, x_1) & \cdots & c(x_n, x_n) \end{pmatrix}^{-1} \begin{pmatrix} c(x_1, x_0) \\ \vdots \\ c(x_n, x_0) \end{pmatrix}.$$

The kriging error is given by

$$\operatorname{Var}\big(\hat{Z}(x_0) - Z(x_0)\big) = \underbrace{c(x_0, x_0)}_{\operatorname{Var}(Z(x_0))} - \underbrace{\begin{pmatrix} c(x_1, x_0) \\ \vdots \\ c(x_n, x_0) \end{pmatrix}' \begin{pmatrix} c(x_1, x_1) & \cdots & c(x_1, x_n) \\ \vdots & \ddots & \vdots \\ c(x_n, x_1) & \cdots & c(x_n, x_n) \end{pmatrix}^{-1} \begin{pmatrix} c(x_1, x_0) \\ \vdots \\ c(x_n, x_0) \end{pmatrix}}_{\operatorname{Var}(\hat{Z}(x_0))},$$

which leads to the generalised least-squares version of the Gauss–Markov theorem (Chiles & Delfiner 1999, p. 159):

$$\operatorname{Var}\big(Z(x_0)\big) = \operatorname{Var}\big(\hat{Z}(x_0)\big) + \operatorname{Var}\big(\hat{Z}(x_0) - Z(x_0)\big).$$
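A compact sketch of these simple-kriging formulas, assuming a known zero mean and a hypothetical exponential covariance `c`, is given below; it returns both the interpolated value and the kriging (error) variance.

```python
import numpy as np

def c(xa, xb, sill=1.0, rng=2.0):
    """Hypothetical exponential covariance function c(x, y)."""
    return sill * np.exp(-np.abs(xa - xb) / rng)

def simple_kriging(x_obs, z_obs, x0):
    """Simple kriging with known (zero) mean: weights, estimate, and error variance."""
    C = c(x_obs[:, None], x_obs[None, :])   # c(x_i, x_j)
    c0 = c(x_obs, x0)                       # c(x_i, x_0)
    w = np.linalg.solve(C, c0)              # simple kriging weights
    z_hat = w @ z_obs                       # interpolated value
    var = c(x0, x0) - w @ c0                # kriging (error) variance
    return z_hat, var

x_obs = np.array([0.0, 1.0, 3.0])
z_obs = np.array([0.5, 0.2, -0.1])
print(simple_kriging(x_obs, z_obs, x0=2.0))
```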

Bayesian kriging


See also: Bayesian Polynomial Chaos.

Properties


Applications


Although kriging was developed originally for applications in geostatistics, it is a general method of statistical interpolation and can be applied within any discipline to sampled data from random fields that satisfy the appropriate mathematical assumptions. It can be used where spatially related data has been collected (in 2-D or 3-D) and estimates of "fill-in" data are desired in the locations (spatial gaps) between the actual measurements.

To date kriging has been used in a variety of disciplines; an important engineering application is described below.

Design and analysis of computer experiments


Another very important and rapidly growing field of application, in engineering, is the interpolation of data coming out as response variables of deterministic computer simulations,[28] e.g. finite element method (FEM) simulations. In this case, kriging is used as a metamodeling tool, i.e. a black-box model built over a designed set of computer experiments. In many practical engineering problems, such as the design of a metal forming process, a single FEM simulation might take several hours or even a few days. It is therefore more efficient to design and run a limited number of computer simulations, and then use a kriging interpolator to rapidly predict the response at any other design point. Kriging is therefore used very often as a so-called surrogate model, implemented inside optimization routines.[29] Kriging-based surrogate models may also be used in the case of mixed integer inputs.[30]
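As a hedged illustration of this surrogate-model workflow (not of any particular FEM code), the sketch below fits a simple kriging/Gaussian-process surrogate to a handful of evaluations of a stand-in `expensive_simulation` function and then queries the cheap surrogate on a dense grid to pick a promising design point; all names and parameter values are hypothetical.

```python
import numpy as np

def expensive_simulation(x):
    """Stand-in for a costly FEM run; here just a cheap analytic function."""
    return (x - 2.0) ** 2 + 0.3 * np.sin(5.0 * x)

def kernel(xa, xb, length_scale=1.0):
    """Squared-exponential covariance used by the surrogate."""
    return np.exp(-0.5 * (xa[:, None] - xb[None, :]) ** 2 / length_scale**2)

# A small designed set of "computer experiments".
x_train = np.linspace(0.0, 4.0, 6)
y_train = expensive_simulation(x_train)

# Fit the kriging surrogate around the sample mean (a constant trend).
y_mean = y_train.mean()
K = kernel(x_train, x_train) + 1e-8 * np.eye(len(x_train))   # jitter for numerical stability
alpha = np.linalg.solve(K, y_train - y_mean)

# Query the surrogate densely; this is cheap compared with rerunning the simulation.
x_grid = np.linspace(0.0, 4.0, 401)
y_pred = y_mean + kernel(x_train, x_grid).T @ alpha

best = x_grid[np.argmin(y_pred)]   # candidate design point suggested by the surrogate
print("surrogate minimiser:", best, "true response there:", expensive_simulation(best))
```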

See also


References

  1. Chung, Sang Yong; Venkatramanan, S.; Elzain, Hussam Eldin; Selvam, S.; Prasanna, M. V. (2019). "Supplement of Missing Data in Groundwater-Level Variations of Peak Type Using Geostatistical Methods". GIS and Geostatistical Techniques for Groundwater Science. Elsevier. pp. 33–41. doi:10.1016/b978-0-12-815413-7.00004-3. ISBN 978-0-12-815413-7. S2CID 189989265.
  2. Wahba, Grace (1990). Spline Models for Observational Data. Vol. 59. SIAM. doi:10.1137/1.9781611970128. ISBN 978-0-89871-244-5.
  3. Williams, C. K. I. (1998). "Prediction with Gaussian Processes: From Linear Regression to Linear Prediction and Beyond". Learning in Graphical Models. pp. 599–621. doi:10.1007/978-94-011-5014-9_23. ISBN 978-94-010-6104-9.
  4. Lee, Se Yoon; Mallick, Bani (2021). "Bayesian Hierarchical Modeling: Application Towards Production Results in the Eagle Ford Shale of South Texas". Sankhya B. 84: 1–43. doi:10.1007/s13571-020-00245-8.
  5. Le Gratiet, Loic; Garnier, Josselin (2014). "Recursive Co-Kriging Model for Design of Computer Experiments with Multiple Levels of Fidelity". International Journal for Uncertainty Quantification. 4 (5): 365–386. doi:10.1615/Int.J.UncertaintyQuantification.2014006914. ISSN 2152-5080. S2CID 14157948.
  6. Ranftl, Sascha; Melito, Gian Marco; Badeli, Vahid; Reinbacher-Köstinger, Alice; Ellermann, Katrin; Linden, Wolfgang von der (2019-12-09). "On the Diagnosis of Aortic Dissection with Impedance Cardiography: A Bayesian Feasibility Study Framework with Multi-Fidelity Simulation Data". Proceedings. 33 (1): 24. doi:10.3390/proceedings2019033024. ISSN 2504-3900.
  7. Ranftl, Sascha; Melito, Gian Marco; Badeli, Vahid; Reinbacher-Köstinger, Alice; Ellermann, Katrin; von der Linden, Wolfgang (2019-12-31). "Bayesian Uncertainty Quantification with Multi-Fidelity Data and Gaussian Processes for Impedance Cardiography of Aortic Dissection". Entropy. 22 (1): 58. Bibcode:2019Entrp..22...58R. doi:10.3390/e22010058. ISSN 1099-4300. PMC 7516489. PMID 33285833.
  8. Ranftl, Sascha; von der Linden, Wolfgang (2021-11-13). "Bayesian Surrogate Analysis and Uncertainty Propagation". Physical Sciences Forum. 3 (1): 6. arXiv:2101.04038. doi:10.3390/psf2021003006. ISSN 2673-9984.
  9. Olea, Ricardo A. (1999). Geostatistics for Engineers and Earth Scientists. Kluwer Academic. ISBN 978-1-4615-5001-3.
  10. Rasmussen, Carl Edward; Williams, Christopher K. I. (2005-11-23). Gaussian Processes for Machine Learning. doi:10.7551/mitpress/3206.001.0001. ISBN 978-0-262-25683-4.
  11. Cressie 1993; Chiles & Delfiner 1999; Wackernagel 1995.
  12. Bayraktar, Hanefi; Sezer, Turalioglu (2005). "A Kriging-based approach for locating a sampling site—in the assessment of air quality". SERRA. 19 (4): 301–305. Bibcode:2005SERRA..19..301B. doi:10.1007/s00477-005-0234-8. S2CID 122643497.
  13. Chiles, J.-P. and P. Delfiner (1999). Geostatistics, Modeling Spatial Uncertainty. Wiley Series in Probability and Statistics.
  14. Zimmerman, D. A.; De Marsily, G.; Gotway, C. A.; Marietta, M. G.; Axness, C. L.; Beauheim, R. L.; Bras, R. L.; Carrera, J.; Dagan, G.; Davies, P. B.; Gallegos, D. P.; Galli, A.; Gómez-Hernández, J.; Grindrod, P.; Gutjahr, A. L.; Kitanidis, P. K.; Lavenue, A. M.; McLaughlin, D.; Neuman, S. P.; Ramarao, B. S.; Ravenne, C.; Rubin, Y. (1998). "A comparison of seven geostatistically based inverse approaches to estimate transmissivities for modeling advective transport by groundwater flow" (PDF). Water Resources Research. 34 (6): 1373–1413. Bibcode:1998WRR....34.1373Z. doi:10.1029/98WR00003.
  15. Tonkin, M. J.; Larson, S. P. (2002). "Kriging Water Levels with a Regional-Linear and Point-Logarithmic Drift". Ground Water. 40 (2): 185–193. Bibcode:2002GrWat..40..185T. doi:10.1111/j.1745-6584.2002.tb02503.x. PMID 11916123. S2CID 23008603.
  16. Journel, A. G.; Huijbregts, C. J. (1978). Mining Geostatistics. London: Academic Press. ISBN 0-12-391050-1.
  17. Richmond, A. (2003). "Financially Efficient Ore Selections Incorporating Grade Uncertainty". Mathematical Geology. 35 (2): 195–215. Bibcode:2003MatG...35..195R. doi:10.1023/A:1023239606028. S2CID 116703619.
  18. Goovaerts (1997). Geostatistics for Natural Resource Evaluation. OUP. ISBN 0-19-511538-4.
  19. Emery, X. (2005). "Simple and Ordinary Multigaussian Kriging for Estimating Recoverable Reserves". Mathematical Geology. 37 (3): 295–319. Bibcode:2005MatGe..37..295E. doi:10.1007/s11004-005-1560-6. S2CID 92993524.
  20. Papritz, A.; Stein, A. (2002). "Spatial prediction by linear kriging". Spatial Statistics for Remote Sensing. Remote Sensing and Digital Image Processing. Vol. 1. p. 83. doi:10.1007/0-306-47647-9_6. ISBN 0-7923-5978-X.
  21. Barris, J.; Garcia Almirall, P. (2010). "A density function of the appraisal value" (PDF). European Real Estate Society.
  22. Okobiah, Oghenekarho; Mohanty, Saraju; Kougianos, Elias (2013). "Geostatistical-Inspired Fast Layout Optimization of a Nano-CMOS Thermal Sensor". IET Circuits, Devices and Systems (CDS). 7 (5): 253–262. Archived 2014-07-14 at the Wayback Machine.
  23. Koziel, Slawomir (2011). "Accurate modeling of microwave devices using kriging-corrected space mapping surrogates". International Journal of Numerical Modelling: Electronic Networks, Devices and Fields. 25: 1–14. doi:10.1002/jnm.803. S2CID 62683207.
  24. Pastorello, Nicola (2014). "The SLUGGS survey: exploring the metallicity gradients of nearby early-type galaxies to large radii". Monthly Notices of the Royal Astronomical Society. 442 (2): 1003–1039. arXiv:1405.2338. Bibcode:2014MNRAS.442.1003P. doi:10.1093/mnras/stu937. S2CID 119221897.
  25. Foster, Caroline; Pastorello, Nicola; Roediger, Joel; Brodie, Jean; Forbes, Duncan; Kartha, Sreeja; Pota, Vincenzo; Romanowsky, Aaron; Spitler, Lee; Strader, Jay; Usher, Christopher; Arnold, Jacob (2016). "The SLUGGS survey: stellar kinematics, kinemetry and trends at large radii in 25 early-type galaxies". Monthly Notices of the Royal Astronomical Society. 457 (1): 147–171. arXiv:1512.06130. Bibcode:2016MNRAS.457..147F. doi:10.1093/mnras/stv2947. S2CID 53472235.
  26. Bellstedt, Sabine; Forbes, Duncan; Foster, Caroline; Romanowsky, Aaron; Brodie, Jean; Pastorello, Nicola; Alabi, Adebusola; Villaume, Alexa (2017). "The SLUGGS survey: using extended stellar kinematics to disentangle the formation histories of low-mass S0 galaxies". Monthly Notices of the Royal Astronomical Society. 467 (4): 4540–4557. arXiv:1702.05099. Bibcode:2017MNRAS.467.4540B. doi:10.1093/mnras/stx418. S2CID 54521046.
  27. Lee, Se Yoon; Mallick, Bani (2021). "Bayesian Hierarchical Modeling: Application Towards Production Results in the Eagle Ford Shale of South Texas". Sankhya B. 84: 1–43. doi:10.1007/s13571-020-00245-8.
  28. Sacks, J.; Welch, W. J.; Mitchell, T. J.; Wynn, H. P. (1989). "Design and Analysis of Computer Experiments". Statistical Science. 4 (4): 409–435. doi:10.1214/ss/1177012413. JSTOR 2245858.
  29. Strano, M. (March 2008). "A technique for FEM optimization under reliability constraint of process variables in sheet metal forming". International Journal of Material Forming. 1 (1): 13–20. doi:10.1007/s12289-008-0001-8. S2CID 136682565.
  30. Saves, Paul; Diouane, Youssef; Bartoli, Nathalie; Lefebvre, Thierry; Morlier, Joseph (2023). "A mixed-categorical correlation kernel for Gaussian process". Neurocomputing. 550: 126472. arXiv:2211.08262. doi:10.1016/j.neucom.2023.126472.

Further reading


Historical references

  1. Chilès, Jean-Paul; Desassis, Nicolas (2018). "Fifty Years of Kriging". Handbook of Mathematical Geosciences. Cham: Springer International Publishing. pp. 589–612. doi:10.1007/978-3-319-78999-6_29. ISBN 978-3-319-78998-9. S2CID 125362741.
  2. Agterberg, F. P., Geomathematics, Mathematical Background and Geo-Science Applications, Elsevier Scientific Publishing Company, Amsterdam, 1974.
  3. Cressie, N. A. C., The origins of kriging, Mathematical Geology, v. 22, pp. 239–252, 1990.
  4. Krige, D. G., A statistical approach to some mine valuations and allied problems at the Witwatersrand, Master's thesis of the University of Witwatersrand, 1951.
  5. Link, R. F. and Koch, G. S., Experimental Designs and Trend-Surface Analysis, Geostatistics, A colloquium, Plenum Press, New York, 1970.
  6. Matheron, G., "Principles of geostatistics", Economic Geology, 58, pp. 1246–1266, 1963.
  7. Matheron, G., "The intrinsic random functions, and their applications", Adv. Appl. Prob., 5, pp. 439–468, 1973.
  8. Merriam, D. F. (editor), Geostatistics, a colloquium, Plenum Press, New York, 1970.
  9. Mockus, J., "On Bayesian methods for seeking the extremum." Proceedings of the IFIP Technical Conference. 1974.

Books

  • Abramowitz, M., and Stegun, I. (1972), Handbook of Mathematical Functions, Dover Publications, New York.
  • Banerjee, S., Carlin, B. P. and Gelfand, A. E. (2004). Hierarchical Modeling and Analysis for Spatial Data. Chapman and Hall/CRC Press, Taylor and Francis Group.
  • Chiles, J.-P. and P. Delfiner (1999) Geostatistics, Modeling Spatial Uncertainty, Wiley Series in Probability and Statistics.
  • Clark, I., and Harper, W. V. (2000) Practical Geostatistics 2000, Ecosse North America, USA.
  • Cressie, N. (1993) Statistics for Spatial Data, Wiley, New York.
  • David, M. (1988) Handbook of Applied Advanced Geostatistical Ore Reserve Estimation, Elsevier Scientific Publishing.
  • Deutsch, C. V., and Journel, A. G. (1992), GSLIB – Geostatistical Software Library and User's Guide, Oxford University Press, New York, 338 pp.
  • Goovaerts, P. (1997) Geostatistics for Natural Resources Evaluation, Oxford University Press, New York, ISBN 0-19-511538-4.
  • Isaaks, E. H., and Srivastava, R. M. (1989), An Introduction to Applied Geostatistics, Oxford University Press, New York, 561 pp.
  • Journel, A. G. and C. J. Huijbregts (1978) Mining Geostatistics, Academic Press, London.
  • Journel, A. G. (1989), Fundamentals of Geostatistics in Five Lessons, American Geophysical Union, Washington D.C.
  • Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007), "Section 3.7.4. Interpolation by Kriging", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8. Also, "Section 15.9. Gaussian Process Regression".
  • Stein, M. L. (1999), Statistical Interpolation of Spatial Data: Some Theory for Kriging, Springer, New York.
  • Wackernagel, H. (1995) Multivariate Geostatistics – An Introduction with Applications, Springer, Berlin.
