In econometrics, cointegration is a statistical property that describes a long-run equilibrium relationship among two or more time series variables, even if the individual series are non-stationary (i.e., they contain stochastic trends). In such cases, the variables may drift apart in the short run, but a linear combination of them is stationary, implying that they move together over time and remain bound by a stable equilibrium.
More formally, if several time series are individually integrated of order d (meaning they require d differences to become stationary) but a linear combination of them is integrated of a lower order, then those time series are said to be cointegrated. That is, if (X, Y, Z) are each integrated of order d, and there exist coefficients a, b, c such that aX + bY + cZ is integrated of order less than d, then X, Y, and Z are cointegrated.
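The definition can be illustrated with a small numpy simulation. The series names, the coefficient 2, and the seed below are illustrative assumptions, not part of the original text: X is a random walk (I(1)), Y shares X's stochastic trend, and the combination Y − 2X is stationary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# X_t is a random walk, hence I(1): it needs one difference to become stationary.
x = np.cumsum(rng.standard_normal(n))

# Y_t = 2*X_t + stationary noise, so Y_t is also I(1).
y = 2.0 * x + rng.standard_normal(n)

# The linear combination Y_t - 2*X_t removes the shared stochastic trend,
# leaving a stationary (I(0)) series.
combo = y - 2.0 * x

# A random walk's sample variance grows with the sample length, while the
# stationary combination's variance stays close to the noise variance (about 1).
print(f"sample variance of the I(1) series X: {np.var(x):.1f}")
print(f"sample variance of the stationary combination: {np.var(combo):.2f}")
```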
Cointegration is a crucial concept in time series analysis, particularly when dealing with variables that exhibit trends, such as macroeconomic data. In an influential paper,[1] Charles Nelson and Charles Plosser (1982) provided statistical evidence that many US macroeconomic time series (like GNP, wages, employment, etc.) have stochastic trends.
If two or more series are individually integrated (in the time series sense) but some linear combination of them has a lower order of integration, then the series are said to be cointegrated. A common example is where the individual series are first-order integrated (I(1)) but some (cointegrating) vector of coefficients exists to form a stationary linear combination of them.
The first to introduce and analyse the concept of spurious, or nonsense, regression was Udny Yule in 1926.[2] Before the 1980s, many economists used linear regressions on non-stationary time series data, which Nobel laureate Clive Granger and Paul Newbold showed to be a dangerous approach that could produce spurious correlation,[3] since standard detrending techniques can result in data that are still non-stationary.[4] Granger's 1987 paper with Robert Engle formalized the cointegrating vector approach and coined the term.[5]
For integrated I(1) processes, Granger and Newbold showed that de-trending does not work to eliminate the problem of spurious correlation, and that the superior alternative is to check for co-integration. Two series with I(1) trends can be co-integrated only if there is a genuine relationship between the two. Thus the standard current methodology for time series regressions is to check all time series involved for integration. If there are I(1) series on both sides of the regression relationship, then it is possible for regressions to give misleading results.
The possible presence of cointegration must be taken into account when choosing a technique to test hypotheses concerning the relationship between two variables having unit roots (i.e., integrated of at least order one).[3] The usual procedure for testing hypotheses concerning the relationship between non-stationary variables was to run ordinary least squares (OLS) regressions on data which had been differenced. This method is biased if the non-stationary variables are cointegrated.
For example, regressing the consumption series for any country (e.g. Fiji) against the GNP for a randomly selected dissimilar country (e.g. Afghanistan) might give a high R-squared relationship (suggesting high explanatory power on Fiji's consumption from Afghanistan's GNP). This is called spurious regression: two integrated series which are not directly causally related may nonetheless show a significant correlation.
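The spurious regression phenomenon can be demonstrated by Monte Carlo simulation. The sketch below (sample size, replication count, and seed are arbitrary choices) regresses pairs of independent random walks on each other and counts how often the conventional t-test on the slope rejects at the 5% level; under a valid test this rate would be about 5%, but for independent I(1) series it is far higher.

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps = 200, 500
significant = 0

for _ in range(reps):
    # Two completely unrelated random walks (both I(1)).
    x = np.cumsum(rng.standard_normal(n))
    y = np.cumsum(rng.standard_normal(n))

    # OLS of y on x with an intercept, and the usual (here invalid) t-statistic.
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)
    se_slope = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    significant += abs(beta[1] / se_slope) > 1.96

# The rejection rate vastly exceeds the nominal 5% level: spurious regression.
print(f"rejection rate at the nominal 5% level: {significant / reps:.2f}")
```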
The three main methods for testing for cointegration are the Engle–Granger two-step method, the Johansen test, and the Phillips–Ouliaris cointegration test.
If X_t and Y_t both have order of integration d = 1 and are cointegrated, then a linear combination of them must be stationary for some value of β and u_t. In other words:

Y_t − β X_t = u_t

where u_t is stationary.
If β is known, we can test u_t for stationarity with an Augmented Dickey–Fuller test or Phillips–Perron test. If β is unknown, we must first estimate it. This is typically done by using ordinary least squares (by regressing Y_t on X_t and an intercept). Then, we can run an ADF test on the estimated residuals û_t. However, when β is estimated, the critical values of this ADF test are non-standard, and increase in absolute value as more regressors are included.[6]
If the variables are found to be cointegrated, a second-stage regression is conducted. This is a regression of ΔY_t on the lagged regressors, ΔX_t, and the lagged residuals from the first stage, û_{t−1}. The second stage regression is given as:

ΔY_t = b ΔX_t + α û_{t−1} + ε_t
If the variables are not cointegrated (if we cannot reject the null of no cointegration when testing û_t), then α = 0 and we estimate a differences model:

ΔY_t = b ΔX_t + ε_t
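The second-stage (error-correction) regression can be sketched with numpy as follows; the simulated data, the true slope of 1.5, and the seed are illustrative assumptions. A negative error-correction coefficient α means Y adjusts back toward the long-run relationship after a deviation.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Cointegrated pair: y_t = 1.5 * x_t + u_t with stationary u_t.
x = np.cumsum(rng.standard_normal(n))
y = 1.5 * x + rng.standard_normal(n)

# First stage: OLS of y on x and an intercept, keeping the residuals u_hat.
X1 = np.column_stack([np.ones(n), x])
b1, *_ = np.linalg.lstsq(X1, y, rcond=None)
u_hat = y - X1 @ b1

# Second stage: regress dY_t on dX_t and the lagged residual u_hat_{t-1}.
dy = np.diff(y)
dx = np.diff(x)
X2 = np.column_stack([dx, u_hat[:-1]])
(b_dx, alpha), *_ = np.linalg.lstsq(X2, dy, rcond=None)

# alpha is the error-correction coefficient; here the data-generating process
# implies alpha near -1 (deviations are corrected within one period).
print(f"slope on dX: {b_dx:.2f}, error-correction coefficient alpha: {alpha:.2f}")
```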
The Johansen test is a test for cointegration that allows for more than one cointegrating relationship, unlike the Engle–Granger method. However, it relies on asymptotic properties, i.e. large samples. If the sample size is too small then the results will not be reliable and one should use autoregressive distributed lag (ARDL) models instead.[7][8]
Peter C. B. Phillips and Sam Ouliaris (1990) show that residual-based unit root tests applied to the estimated cointegrating residuals do not have the usual Dickey–Fuller distributions under the null hypothesis of no cointegration.[9] Because of the spurious regression phenomenon under the null hypothesis, these tests have asymptotic distributions that depend on (1) the number of deterministic trend terms and (2) the number of variables with which co-integration is being tested. These distributions are known as Phillips–Ouliaris distributions, and critical values have been tabulated. In finite samples, a superior alternative to the use of these asymptotic critical values is to generate critical values from simulations.
In practice, cointegration is often used for two series, but it is more generally applicable and can be used for variables integrated of higher order (to detect correlated accelerations or other second-difference effects). Multicointegration extends the cointegration technique beyond two variables, and occasionally to variables integrated at different orders.
Tests for cointegration assume that the cointegrating vector is constant during the period of study. In reality, the long-run relationship between the underlying variables may change (shifts in the cointegrating vector can occur), for reasons such as technological progress, economic crises, changes in people's preferences and behaviour, policy or regime alteration, and organizational or institutional developments. This is especially likely if the sample period is long. To take this issue into account, tests have been introduced for cointegration with one unknown structural break,[10] and tests for cointegration with two unknown breaks are also available.[11]
Several Bayesian methods have been proposed to compute the posterior distribution of the number of cointegrating relationships and the cointegrating linear combinations.[12]