Simultaneous equations models are a type of statistical model in which the dependent variables are functions of other dependent variables, rather than just independent variables.[1] This means some of the explanatory variables are jointly determined with the dependent variable, which in economics is usually the consequence of some underlying equilibrium mechanism. Take the typical supply and demand model: while one would typically determine the quantity supplied and the quantity demanded as functions of the price set by the market, the reverse is also possible, where producers observe the quantity that consumers demand and then set the price.[2]
Simultaneity poses challenges for the estimation of the statistical parameters of interest, because the Gauss–Markov assumption of strict exogeneity of the regressors is violated. While it would be natural to estimate all simultaneous equations at once, this often leads to a computationally costly non-linear optimization problem even for the simplest system of linear equations.[3] This situation prompted the development, spearheaded by the Cowles Commission in the 1940s and 1950s,[4] of various techniques that estimate each equation in the model seriatim, most notably limited information maximum likelihood and two-stage least squares.[5]
Suppose there are m regression equations of the form
\[ y_{it} = y_{-i,t}'\gamma_i + x_{it}'\beta_i + u_{it}, \qquad i = 1, \dots, m, \]
where i is the equation number, and t = 1, ..., T is the observation index. In these equations xit is the ki×1 vector of exogenous variables, yit is the dependent variable, y−i,t is the ni×1 vector of all other endogenous variables which enter the ith equation on the right-hand side, and uit are the error terms. The “−i” notation indicates that the vector y−i,t may contain any of the y’s except for yit (since it is already present on the left-hand side). The regression coefficients βi and γi are of dimensions ki×1 and ni×1 respectively. Vertically stacking the T observations corresponding to the ith equation, we can write each equation in vector form as
\[ y_i = Y_{-i}\gamma_i + X_i\beta_i + u_i, \]
where yi and ui are T×1 vectors, Xi is a T×ki matrix of exogenous regressors, and Y−i is a T×ni matrix of endogenous regressors on the right-hand side of the ith equation. Finally, we can move all endogenous variables to the left-hand side and write the m equations jointly in vector form as
\[ Y\Gamma = X\mathrm{B} + U. \]
This representation is known as the structural form. In this equation Y = [y1 y2 ... ym] is the T×m matrix of dependent variables. Each of the matrices Y−i is in fact an ni-columned submatrix of this Y. The m×m matrix Γ, which describes the relation between the dependent variables, has a complicated structure. It has ones on the diagonal, and all other elements of each column i are either the components of the vector −γi or zeros, depending on which columns of Y were included in the matrix Y−i. The T×k matrix X contains all exogenous regressors from all equations, but without repetitions (that is, matrix X should be of full rank). Thus, each Xi is a ki-columned submatrix of X. Matrix Β has size k×m, and each of its columns consists of the components of vectors βi and zeros, depending on which of the regressors from X were included or excluded from Xi. Finally, U = [u1 u2 ... um] is a T×m matrix of the error terms.
Postmultiplying the structural equation by Γ−1, the system can be written in the reduced form as
\[ Y = X\mathrm{B}\Gamma^{-1} + U\Gamma^{-1} = X\Pi + V, \qquad \Pi \equiv \mathrm{B}\Gamma^{-1}, \quad V \equiv U\Gamma^{-1}. \]
This is already a simple general linear model, and it can be estimated for example by ordinary least squares. Unfortunately, the task of decomposing the estimated matrix Π into the individual factors Β and Γ−1 is quite complicated, and therefore the reduced form is more suitable for prediction than for inference.
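To make the reduced-form step concrete, here is a minimal sketch in Python, assuming the data have already been assembled into numpy arrays X (T×k) and Y (T×m); the function name and array layout are illustrative, not part of any established library.

```python
import numpy as np

def reduced_form_ols(X, Y):
    """Estimate Pi in the reduced form Y = X Pi + V by ordinary least squares.

    X : (T, k) array of exogenous regressors, assumed full column rank
    Y : (T, m) array of endogenous variables
    Returns the (k, m) matrix of estimates, one column per endogenous variable.
    """
    # lstsq solves the normal equations X'X Pi = X'Y column by column,
    # which is numerically safer than forming (X'X)^{-1} explicitly.
    Pi_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return Pi_hat
```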
Firstly, the rank of the matrix X of exogenous regressors must be equal to k, both in finite samples and in the limit as T → ∞ (this latter requirement means that in the limit the expression \(\tfrac{1}{T}X'X\) should converge to a nondegenerate k×k matrix). Matrix Γ is also assumed to be non-degenerate.
Secondly, error terms are assumed to be serially independent and identically distributed. That is, if the tth row of matrix U is denoted by u(t), then the sequence of vectors {u(t)} should be iid, with zero mean and some covariance matrix Σ (which is unknown). In particular, this implies that E[U] = 0, and E[U′U] = T Σ.
Lastly, assumptions are required for identification.
The identification conditions require that the system of linear equations be solvable for the unknown parameters.
More specifically, the order condition, a necessary condition for identification, is that for each equation ki + ni ≤ k, which can be phrased as “the number of excluded exogenous variables is greater than or equal to the number of included endogenous variables”.
The rank condition, a stronger condition which is necessary and sufficient, is that the rank of Πi0 equals ni, where Πi0 is a (k − ki)×ni matrix which is obtained from Π by crossing out those columns which correspond to the excluded endogenous variables, and those rows which correspond to the included exogenous variables.
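As a rough illustration of how these two conditions could be checked numerically, the sketch below tests the order and rank conditions for one equation, assuming the (true or estimated) reduced-form matrix Π is available as a numpy array; the function and argument names are hypothetical.

```python
import numpy as np

def identification_check(Pi, incl_exog, incl_endog, tol=1e-8):
    """Check the order and rank conditions for one structural equation.

    Pi         : (k, m) reduced-form coefficient matrix
    incl_exog  : row indices of the exogenous variables included in the equation
    incl_endog : column indices of the endogenous variables on its right-hand side
    """
    k, m = Pi.shape
    ki, ni = len(incl_exog), len(incl_endog)
    # Order condition: excluded exogenous (k - ki) >= included endogenous (ni).
    order_ok = ki + ni <= k
    # Rank condition: the submatrix Pi_i0 (rows = excluded exogenous variables,
    # columns = included endogenous variables) must have rank ni.
    excl_exog = [r for r in range(k) if r not in incl_exog]
    Pi_i0 = Pi[np.ix_(excl_exog, incl_endog)]
    rank_ok = np.linalg.matrix_rank(Pi_i0, tol=tol) == ni
    return order_ok, rank_ok
```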
In simultaneous equations models, the most common method to achieve identification is by imposing within-equation parameter restrictions.[6] Yet, identification is also possible using cross equation restrictions.
To illustrate how cross equation restrictions can be used for identification, consider the following example from Wooldridge:[6]
\[ y_1 = \gamma_{12} y_2 + \delta_{11} z_1 + \delta_{12} z_2 + \delta_{13} z_3 + u_1 \]
\[ y_2 = \gamma_{21} y_1 + \delta_{21} z_1 + \delta_{22} z_2 + u_2 \]
where the z's are uncorrelated with the u's and the y's are endogenous variables. Without further restrictions, the first equation is not identified because there is no excluded exogenous variable. The second equation is just identified if δ13 ≠ 0, which is assumed to be true for the rest of the discussion.
Now we impose the cross equation restriction δ12 = δ22. Since the second equation is identified, we can treat δ12 as known for the purpose of identification. Then, the first equation becomes:
\[ y_1 - \delta_{12} z_2 = \gamma_{12} y_2 + \delta_{11} z_1 + \delta_{13} z_3 + u_1. \]
Then, we can use (z1, z2, z3) as instruments to estimate the coefficients in the above equation, since there is one endogenous variable (y2) and one excluded exogenous variable (z2) on the right-hand side. Therefore, cross equation restrictions in place of within-equation restrictions can achieve identification.
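Because the transformed first equation is just identified, its coefficients follow from the standard just-identified IV formula. The sketch below assumes hypothetical arrays y1, y2, z1, z2, z3 of length T and an estimate delta12_hat obtained from a first-step estimation of the second equation; none of these names come from an established API.

```python
import numpy as np

def iv_just_identified(y, Z, W):
    """Just-identified IV estimator: solve W'Z delta = W'y.

    y : (T,) dependent variable
    Z : (T, p) right-hand-side regressors
    W : (T, p) instruments (as many instruments as regressors)
    """
    return np.linalg.solve(W.T @ Z, W.T @ y)

# Usage for the transformed first equation:
#   dependent variable y1 - delta12_hat * z2,
#   regressors [y2, z1, z3], instruments [z1, z2, z3].
# coef = iv_just_identified(y1 - delta12_hat * z2,
#                           np.column_stack([y2, z1, z3]),
#                           np.column_stack([z1, z2, z3]))
```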
The simplest and most common estimation method for the simultaneous equations model is the so-called two-stage least squares method,[7] developed independently by Theil (1953) and Basmann (1957).[8][9][10] It is an equation-by-equation technique, where the endogenous regressors on the right-hand side of each equation are instrumented with the regressors X from all other equations. The method is called “two-stage” because it conducts estimation in two steps:[7] in the first step, the right-hand-side endogenous variables Y−i are regressed on all exogenous variables X to obtain the fitted values Ŷ−i; in the second step, γi and βi are estimated by an OLS regression of yi on Ŷ−i and Xi.
If the ith equation in the model is written as
\[ y_i = Z_i\delta_i + u_i, \qquad Z_i = [\,Y_{-i}\ \ X_i\,], \]
where Zi is a T×(ni + ki) matrix of both endogenous and exogenous regressors in the ith equation, and δi is an (ni + ki)-dimensional vector of regression coefficients, then the 2SLS estimator of δi will be given by[7]
\[ \hat\delta_i = \bigl(Z_i'PZ_i\bigr)^{-1} Z_i'P\,y_i, \]
where P = X(X′X)−1X′ is the projection matrix onto the linear space spanned by the exogenous regressors X.
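A compact sketch of this estimator, assuming numpy arrays in the notation above; the helper name is illustrative. Rather than forming P explicitly, the first stage projects the regressors onto the instrument space with a least-squares solve, which is the same computation.

```python
import numpy as np

def two_stage_least_squares(y_i, Z_i, X):
    """2SLS: delta_hat = (Z'PZ)^{-1} Z'Py with P = X(X'X)^{-1}X'.

    y_i : (T,) dependent variable of the i-th equation
    Z_i : (T, ni + ki) included endogenous and exogenous regressors
    X   : (T, k) all exogenous variables of the system (the instruments)
    """
    # Stage 1: fitted values Z_hat = P Z_i from regressing Z_i on X.
    Z_hat = X @ np.linalg.lstsq(X, Z_i, rcond=None)[0]
    # Stage 2: OLS of y_i on the fitted regressors; since P is a symmetric
    # idempotent projection, Z_hat'Z_i = Z_i'PZ_i and Z_hat'y_i = Z_i'Py_i,
    # reproducing the closed-form expression above.
    return np.linalg.solve(Z_hat.T @ Z_i, Z_hat.T @ y_i)
```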
Indirect least squares is an approach in econometrics where the coefficients in a simultaneous equations model are estimated from the reduced form model using ordinary least squares.[11][12] For this, the structural system of equations is transformed into the reduced form first. Once the coefficients are estimated, the model is put back into the structural form.
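For a just-identified equation the back-transformation can be done in closed form: from ΠΓ = Β, the rows of Π corresponding to the excluded exogenous variables pin down γi, and the remaining rows then give βi. The sketch below, with illustrative names and index conventions, assumes the equation is exactly identified so that the relevant submatrix of Π is square and invertible.

```python
import numpy as np

def indirect_least_squares(X, Y, i, incl_exog, incl_endog):
    """ILS for a just-identified equation i of the system Y Gamma = X B + U.

    X, Y       : (T, k) exogenous and (T, m) endogenous data
    i          : column of Y holding the equation's dependent variable
    incl_exog  : row indices of exogenous variables included in the equation
    incl_endog : column indices of endogenous variables on its right-hand side
                 (just identification: len(incl_endog) == k - len(incl_exog))
    """
    k = X.shape[1]
    Pi_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)   # (k, m) reduced form by OLS
    excl_exog = [r for r in range(k) if r not in incl_exog]
    # Rows of Pi*Gamma = B for excluded exogenous variables give
    # pi_i0 = Pi_i0 gamma_i, which is solvable when Pi_i0 is square.
    gamma_i = np.linalg.solve(Pi_hat[np.ix_(excl_exog, incl_endog)],
                              Pi_hat[excl_exog, i])
    # Rows for the included exogenous variables then recover beta_i.
    beta_i = Pi_hat[incl_exog, i] - Pi_hat[np.ix_(incl_exog, incl_endog)] @ gamma_i
    return gamma_i, beta_i
```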
The “limited information” maximum likelihood method was suggested by M. A. Girshick in 1947,[13] and formalized by T. W. Anderson and H. Rubin in 1949.[14] It is used when one is interested in estimating a single structural equation at a time (hence its name of limited information), say for equation i:
\[ y_i = Y_{-i}\gamma_i + X_i\beta_i + u_i. \]
The structural equations for the remaining endogenous variables Y−i are not specified, and they are given in their reduced form:
\[ Y_{-i} = X\Pi + U_{-i}. \]
Notation in this context is different than for the simple IV case: here Z ≡ [Y−i Xi] denotes the matrix of right-hand-side regressors of the ith equation, and δi ≡ (γi′, βi′)′ its coefficient vector.
The explicit formula for the LIML is:[15]
\[ \hat\delta_i = \bigl(Z'(I - \lambda M)Z\bigr)^{-1} Z'(I - \lambda M)\,y_i, \]
where M = I − X(X′X)−1X′, and λ is the smallest characteristic root of the matrix
\[ \bigl([\,y_i\ Y_{-i}\,]'M_i\,[\,y_i\ Y_{-i}\,]\bigr)\bigl([\,y_i\ Y_{-i}\,]'M\,[\,y_i\ Y_{-i}\,]\bigr)^{-1}, \]
where, in a similar way, Mi = I − Xi(Xi′Xi)−1Xi′.
In other words, λ is the smallest solution of the generalized eigenvalue problem (see Theil (1971, p. 503)):
\[ \bigl|\,[\,y_i\ Y_{-i}\,]'M_i\,[\,y_i\ Y_{-i}\,] - \lambda\,[\,y_i\ Y_{-i}\,]'M\,[\,y_i\ Y_{-i}\,]\,\bigr| = 0. \]
The LIML is a special case of the K-class estimators:[16]
\[ \hat\delta = \bigl(Z'(I - \kappa M)Z\bigr)^{-1} Z'(I - \kappa M)\,y, \]
with κ a scalar characterizing the particular member of the class.
Several estimators belong to this class: κ = 0 yields OLS; κ = 1 yields 2SLS; κ = λ yields the LIML; and κ = λ − α/(T − k) yields the Fuller (1977) estimator,[17] where α is a positive constant (with α = 1 the Fuller estimator is approximately unbiased).
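The following sketch implements the K-class family for a single equation, including the LIML choice of κ via the eigenvalue problem above; names and conventions are illustrative, and explicit T×T annihilator matrices are used for clarity rather than efficiency.

```python
import numpy as np

def k_class(y_i, Y_mi, X_i, X, kappa=None):
    """K-class estimator; kappa=0 gives OLS, kappa=1 gives 2SLS,
    and kappa=None computes lambda for the LIML estimator.

    y_i  : (T,) dependent variable of the i-th equation
    Y_mi : (T, ni) other endogenous variables in the equation
    X_i  : (T, ki) included exogenous variables
    X    : (T, k) all exogenous variables of the system
    """
    def annihilator(A):
        # M_A = I - A (A'A)^{-1} A'
        return np.eye(A.shape[0]) - A @ np.linalg.pinv(A)

    M, M_i = annihilator(X), annihilator(X_i)
    Z = np.column_stack([Y_mi, X_i])

    if kappa is None:
        # LIML: smallest eigenvalue of (W'MW)^{-1} (W'M_iW), with W = [y_i Y_-i],
        # which has the same spectrum as (W'M_iW)(W'MW)^{-1}.
        W = np.column_stack([y_i, Y_mi])
        eigvals = np.linalg.eigvals(np.linalg.solve(W.T @ M @ W, W.T @ M_i @ W))
        kappa = np.min(eigvals.real)

    A = np.eye(len(y_i)) - kappa * M
    return np.linalg.solve(Z.T @ A @ Z, Z.T @ A @ y_i)
```

Setting kappa=0 or kappa=1 reproduces OLS and 2SLS respectively, which gives a quick sanity check of the implementation.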
The three-stage least squares estimator was introduced by Zellner & Theil (1962).[18][19] It can be seen as a special case of multi-equation GMM where the set of instrumental variables is common to all equations.[20] If all regressors are in fact predetermined, then 3SLS reduces to seemingly unrelated regressions (SUR). Thus it may also be seen as a combination of two-stage least squares (2SLS) with SUR.
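A sketch of 3SLS under the assumptions above (common instrument set X, one regressor matrix per equation): it runs 2SLS equation by equation, estimates Σ from the residuals, and then applies GLS with weight Σ−1 ⊗ P to the stacked system. All names are illustrative.

```python
import numpy as np

def three_sls(ys, Zs, X):
    """Three-stage least squares for m equations y_i = Z_i delta_i + u_i
    with a common instrument matrix X; a sketch, not production code.

    ys : list of m (T,) dependent-variable vectors
    Zs : list of m (T, p_i) regressor matrices
    X  : (T, k) matrix of all exogenous variables (the instruments)
    """
    T, m = len(ys[0]), len(ys)
    P = X @ np.linalg.pinv(X)                       # projection onto instruments

    # Stages 1-2: 2SLS equation by equation, to get residuals for Sigma.
    deltas = [np.linalg.solve(Z.T @ P @ Z, Z.T @ P @ y) for y, Z in zip(ys, Zs)]
    resid = np.column_stack([y - Z @ d for y, Z, d in zip(ys, Zs, deltas)])
    Sigma_inv = np.linalg.inv(resid.T @ resid / T)

    # Stage 3: GLS on the stacked system; block (i, j) of the normal equations
    # is sigma^{ij} Z_i' P Z_j, and block i of the right-hand side is
    # sum_j sigma^{ij} Z_i' P y_j.
    p = [Z.shape[1] for Z in Zs]
    off = np.concatenate([[0], np.cumsum(p)])
    A = np.zeros((off[-1], off[-1]))
    b = np.zeros(off[-1])
    for i in range(m):
        for j in range(m):
            w = Sigma_inv[i, j]
            A[off[i]:off[i+1], off[j]:off[j+1]] = w * (Zs[i].T @ P @ Zs[j])
            b[off[i]:off[i+1]] += w * (Zs[i].T @ P @ ys[j])
    delta = np.linalg.solve(A, b)
    return [delta[off[i]:off[i+1]] for i in range(m)]
```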
Across fields and disciplines simultaneous equations models are applied to various observational phenomena. These equations are applied when phenomena are assumed to be reciprocally causal. The classic example is supply and demand in economics. In other disciplines there are examples such as candidate evaluations and party identification[21] or public opinion and social policy in political science;[22][23] road investment and travel demand in geography;[24] and educational attainment and parenthood entry in sociology or demography.[25] The simultaneous equations model requires a theory of reciprocal causality that includes special features if the causal effects are to be estimated as simultaneous feedback, as opposed to one-sided 'blocks' of an equation where a researcher is interested in the causal effect of X on Y while holding the causal effect of Y on X constant, or where the researcher knows the exact amount of time it takes for each causal effect to take place, i.e., the length of the causal lags. Instead of lagged effects, simultaneous feedback means estimating the simultaneous and perpetual impact of X and Y on each other. This requires a theory that causal effects are simultaneous in time, or so complex that they appear to behave simultaneously; a common example is the moods of roommates.[26] To estimate simultaneous feedback models a theory of equilibrium is also necessary – that X and Y are in relatively steady states or are part of a system (society, market, classroom) that is in a relatively stable state.[27]