Correlogram

From Wikipedia, the free encyclopedia
Not to be confused with Scatterplot.

A plot showing 100 random numbers with a "hidden" sine function, and an autocorrelation (correlogram) of the series on the bottom.

In the analysis of data, a correlogram is a chart of correlation statistics. For example, in time series analysis, a plot of the sample autocorrelations $r_h$ versus $h$ (the time lags) is an autocorrelogram. If cross-correlation is plotted, the result is called a cross-correlogram.

The correlogram is a commonly used tool for checking randomness in a data set. If random, autocorrelations should be near zero for any and all time-lag separations. If non-random, then one or more of the autocorrelations will be significantly non-zero.

In addition, correlograms are used in the model identification stage for Box–Jenkins autoregressive moving average time series models. Autocorrelations should be near-zero for randomness; if the analyst does not check for randomness, then the validity of many of the statistical conclusions becomes suspect. The correlogram is an excellent way of checking for such randomness.

In multivariate analysis, correlation matrices shown as color-mapped images may also be called "correlograms" or "corrgrams".[1][2][3]
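
As a minimal sketch of drawing a correlogram (assuming Python with NumPy, pandas, and matplotlib; the noisy-sine series here is illustrative, echoing the figure above):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import autocorrelation_plot

# 100 random numbers with a "hidden" sine component, as in the figure above
rng = np.random.default_rng(0)
t = np.arange(100)
series = pd.Series(np.sin(2 * np.pi * t / 20) + rng.normal(size=100))

autocorrelation_plot(series)  # plots the sample autocorrelations r_h against the lag h
plt.show()

The hidden periodicity shows up as a slowly oscillating autocorrelation pattern that a purely random series would lack.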

Applications


The correlogram can help provide answers to the following questions:[4]

  • Are the data random?
  • Is an observation related to an adjacent observation?
  • Is an observation related to an observation twice-removed? (etc.)
  • Is the observed time series white noise?
  • Is the observed time series sinusoidal?
  • Is the observed time series autoregressive?
  • What is an appropriate model for the observed time series?
  • Is the model $Y = \text{constant} + \text{error}$ valid and sufficient?

Importance


Randomness (along with fixed model, fixed variation, and fixed distribution) is one of the four assumptions that typically underlie all measurement processes. The randomness assumption is critically important for the following three reasons:

  • Most standard statistical tests depend on randomness. The validity of the test conclusions is directly linked to the validity of the randomness assumption.
  • Many commonly used statistical formulae depend on the randomness assumption, the most common being the formula for determining the standard error of the sample mean (a worked sketch follows this list):

$$s_{\bar{Y}} = s/\sqrt{N}$$

where $s$ is the standard deviation of the data. Although heavily used, the results from using this formula are of no value unless the randomness assumption holds.

  • For univariate data, the default model is
$$Y = \text{constant} + \text{error}$$

If the data are not random, this model is invalid, and the estimates of its parameters (such as the constant) become nonsensical.
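
As a small worked sketch of the standard-error formula (the measurements below are hypothetical):

import numpy as np

y = np.array([4.1, 5.2, 3.8, 4.9, 5.0, 4.4])  # hypothetical measurements
s = y.std(ddof=1)                             # sample standard deviation s
se_mean = s / np.sqrt(len(y))                 # s / sqrt(N)
print(se_mean)                                # meaningful only if the data are random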

Estimation of autocorrelations


The autocorrelation coefficient at lag $h$ is given by

$$r_h = c_h/c_0$$

where $c_h$ is the autocovariance function

$$c_h = \frac{1}{N}\sum_{t=1}^{N-h}\left(Y_t - \bar{Y}\right)\left(Y_{t+h} - \bar{Y}\right)$$

and $c_0$ is the variance function

$$c_0 = \frac{1}{N}\sum_{t=1}^{N}\left(Y_t - \bar{Y}\right)^2$$

The resulting value of $r_h$ will range between −1 and +1.
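
A minimal sketch of these estimators (illustrative Python/NumPy; the function name autocorr is an assumption, not a standard API):

import numpy as np

def autocorr(y, max_lag):
    """Sample autocorrelations r_h = c_h / c_0, using the 1/N convention above."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    d = y - y.mean()                        # deviations Y_t - Ybar
    c0 = np.sum(d * d) / N                  # variance function c_0
    r = np.empty(max_lag + 1)
    r[0] = 1.0                              # c_0 / c_0
    for h in range(1, max_lag + 1):
        ch = np.sum(d[:N - h] * d[h:]) / N  # autocovariance c_h
        r[h] = ch / c0
    return r

Plotting r[1:] against the lags 1, 2, ... gives the correlogram.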

Alternate estimate


Some sources may use the following formula for the autocovariance function:

$$c_h = \frac{1}{N-h}\sum_{t=1}^{N-h}\left(Y_t - \bar{Y}\right)\left(Y_{t+h} - \bar{Y}\right)$$

Although this definition has less bias, the $1/N$ formulation has some desirable statistical properties and is the form most commonly used in the statistics literature. See pages 20 and 49–50 in Chatfield for details.

In contrast to the definition above, this definition allows us to compute $c_h$ in a slightly more intuitive way. Consider the sample $Y_1, \dots, Y_N$, where $Y_i \in \mathbb{R}^n$ for $i = 1, \dots, N$. Then, let

$$X = \begin{bmatrix} Y_1 - \bar{Y} & \cdots & Y_N - \bar{Y} \end{bmatrix} \in \mathbb{R}^{n \times N}$$

We then compute the Gram matrix $Q = X^\top X$. Finally, $c_h$ is computed as the sample mean of the $h$th diagonal of $Q$. For example, the 0th diagonal (the main diagonal) of $Q$ has $N$ elements, and its sample mean corresponds to $c_0$. The 1st diagonal (to the right of the main diagonal) of $Q$ has $N-1$ elements, and its sample mean corresponds to $c_1$, and so on.
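
A sketch of this Gram-matrix computation (illustrative names; a scalar series corresponds to n = 1):

import numpy as np

def autocov_gram(Y, h):
    """c_h as the sample mean of the h-th diagonal of Q = X^T X;
    equivalent to the 1/(N - h) definition, since that diagonal has N - h elements."""
    Y = np.atleast_2d(np.asarray(Y, dtype=float))  # shape (n, N)
    X = Y - Y.mean(axis=1, keepdims=True)          # columns are Y_t - Ybar
    Q = X.T @ X                                    # Gram matrix, shape (N, N)
    return np.diagonal(Q, offset=h).mean()         # mean of the h-th diagonal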

Statistical inference with correlograms

Correlogram example from a 400-point sample of a first-order autoregressive process with 0.75 correlation of adjacent points, along with the 95% confidence intervals (plotted about the correlation estimates in black and about zero in red), as calculated by the equations in this section. The dashed blue line shows the actual autocorrelation function of the sampled process.
20 correlograms from 400-point samples of the same random process as in the previous figure.

In the same graph one can draw upper and lower bounds for autocorrelation with significance level $\alpha$:

$$B = \pm z_{1-\alpha/2}\,\mathrm{SE}(r_h)$$ with $r_h$ as the estimated autocorrelation at lag $h$.

If the autocorrelation is higher (lower) than this upper (lower) bound, the null hypothesis that there is no autocorrelation at and beyond a given lag is rejected at a significance level of $\alpha$. This test is an approximate one and assumes that the time series is Gaussian.

In the above, $z_{1-\alpha/2}$ is the quantile of the normal distribution; SE is the standard error, which can be computed by Bartlett's formula for MA(ℓ) processes:

$$\mathrm{SE}(r_1) = \frac{1}{\sqrt{N}}$$
$$\mathrm{SE}(r_h) = \sqrt{\frac{1 + 2\sum_{i=1}^{h-1} r_i^2}{N}} \quad \text{for } h > 1.$$

In the example plotted, we can reject the null hypothesis that there is no autocorrelation between time-points which are separated by lags up to 4. For most longer periods one cannot reject the null hypothesis of no autocorrelation.
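
A sketch of these bounds (reusing the autocorr helper sketched earlier; SciPy's normal quantile is assumed, e.g. z_{0.975} ≈ 1.96 for α = 0.05):

import numpy as np
from scipy.stats import norm

def bartlett_bounds(r, N, alpha=0.05):
    """Half-widths z_{1-alpha/2} * SE(r_h) for lags h = 1 .. len(r) - 1."""
    z = norm.ppf(1 - alpha / 2)
    se = [1 / np.sqrt(N)]                    # SE(r_1)
    for h in range(2, len(r)):               # SE(r_h) for h > 1
        se.append(np.sqrt((1 + 2 * np.sum(r[1:h] ** 2)) / N))
    return z * np.array(se)

# the null of no autocorrelation at lag h is rejected when |r[h]| > bounds[h - 1]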

Note that there are two distinct formulas for generating the confidence bands (a sketch computing both appears after this list):

1. If the correlogram is being used to test for randomness (i.e., there is no time dependence in the data), the following formula is recommended:

$$\pm \frac{z_{1-\alpha/2}}{\sqrt{N}}$$

where $N$ is the sample size, $z$ is the quantile function of the standard normal distribution, and $\alpha$ is the significance level. In this case, the confidence bands have fixed width that depends on the sample size.

2. Correlograms are also used in the model identification stage for fitting ARIMA models. In this case, a moving average model is assumed for the data and the following confidence bands should be generated:

$$\pm z_{1-\alpha/2}\sqrt{\frac{1}{N}\left(1 + 2\sum_{i=1}^{k} r_i^2\right)}$$

where $k$ is the lag. In this case, the confidence bands increase as the lag increases.
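
A sketch computing both kinds of half-widths about zero (illustrative; r is an array of sample autocorrelations with r[0] = 1, as produced by the autocorr helper above):

import numpy as np
from scipy.stats import norm

def band_halfwidths(r, N, alpha=0.05, assume_white_noise=True):
    """Half-widths of the confidence bands for lags 1 .. len(r) - 1."""
    z = norm.ppf(1 - alpha / 2)
    if assume_white_noise:                  # case 1: fixed-width bands
        return np.full(len(r) - 1, z / np.sqrt(N))
    cum = np.cumsum(r[1:] ** 2)             # sum of r_1^2 .. r_k^2 at each lag k
    return z * np.sqrt((1 + 2 * cum) / N)   # case 2: bands widen with the lag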

Software


Correlograms are available in most general-purpose statistical libraries; see, for example, the autocorrelation plot in common visualization tools[5] for correlograms, and the R package corrgram for corrgrams.[2][3]


References

  1. Friendly, Michael (19 August 2002). "Corrgrams: Exploratory displays for correlation matrices" (PDF). The American Statistician. 56 (4). Taylor & Francis: 316–324. doi:10.1198/000313002533. Retrieved 19 January 2014.
  2. "CRAN – Package corrgram". cran.r-project.org. 29 August 2013. Retrieved 19 January 2014.
  3. "Quick-R: Correlograms". statmethods.net. Retrieved 19 January 2014.
  4. "1.3.3.1. Autocorrelation Plot". www.itl.nist.gov. Retrieved 20 August 2018.
  5. "Visualization § Autocorrelation plot".

Further reading

  • Hanke, John E.; Reitsch, Arthur G.; Wichern, Dean W. Business Forecasting (7th ed.). Upper Saddle River, NJ: Prentice Hall.
  • Box, G. E. P.; Jenkins, G. (1976). Time Series Analysis: Forecasting and Control. Holden-Day.
  • Chatfield, C. (1989). The Analysis of Time Series: An Introduction (4th ed.). New York, NY: Chapman & Hall.


This article incorporates public domain material from the National Institute of Standards and Technology.
