Econometrics is an application of statistical methods to economic data in order to give empirical content to economic relationships.[1] More precisely, it is "the quantitative analysis of actual economic phenomena based on the concurrent development of theory and observation, related by appropriate methods of inference."[2] An introductory economics textbook describes econometrics as allowing economists "to sift through mountains of data to extract simple relationships."[3] Jan Tinbergen is one of the two founding fathers of econometrics.[4][5][6] The other, Ragnar Frisch, also coined the term in the sense in which it is used today.[7]
A basic tool for econometrics is the multiple linear regression model.[8] In modern econometrics, other statistical tools are frequently used, but linear regression remains the most common starting point for an analysis.[8] Estimating a linear regression on two variables can be visualized as fitting a line through data points representing paired values of the independent and dependent variables.
[Figure: Okun's law, representing the relationship between GDP growth and the unemployment rate; the fitted line is found using regression analysis.]
One of the earliest applications of linear regression was conducted in 1889 by the British statistician G. Udny Yule, who sought to identify a causal relationship between public assistance and the number of impoverished people in England.[12] Using two time periods from the 1871 and 1881 census data at the county level, he estimated the effect of receiving social assistance on the rate of poverty.[13] While it would likely not meet today's threshold given concerns of two-way causality and bias in the error term, his regression specification related changes in the rate of pauperism to changes in the receipt of public assistance.
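A minimal sketch of that specification (a simplified reconstruction for illustration; Yule's published work also controlled for changes in population and in the proportion of the old) is:

$$\Delta\,\text{Pauperism} = \beta_0 + \beta_1\,\Delta\,\text{OutRelief} + \varepsilon$$

where out-relief is assistance granted outside the workhouse. The two-way-causality concern is visible in this form: the amount of relief granted may respond to the extent of pauperism just as pauperism responds to relief.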
Another example, one that is rarely used today, can be found in macroeconomics. Consider Okun's law, which relates GDP growth to the unemployment rate. This relationship is represented in a linear regression where the change in the unemployment rate ($\Delta\,\text{Unemployment}$) is a function of an intercept ($\beta_0$), a given value of GDP growth multiplied by a slope coefficient $\beta_1$, and an error term $\varepsilon$:

$$\Delta\,\text{Unemployment} = \beta_0 + \beta_1\,\text{Growth} + \varepsilon$$
The unknown parameters $\beta_0$ and $\beta_1$ can be estimated. Here $\beta_0$ is estimated to be 0.83 and $\beta_1$ is estimated to be -1.77. This means that if GDP growth increased by one percentage point, the unemployment rate would be predicted to drop by 1.77 × 1 = 1.77 points, other things held constant. The model could then be tested for statistical significance as to whether an increase in GDP growth is associated with a decrease in the unemployment rate, as hypothesized. If the estimate of $\beta_1$ were not significantly different from 0, the test would fail to find evidence that changes in the growth rate and unemployment rate were related. The variance in a prediction of the dependent variable (unemployment) as a function of the independent variable (GDP growth) is given in polynomial least squares.
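As an illustration of how such an estimate and its significance test can be computed, here is a minimal sketch of ordinary least squares in Python (the data points below are invented for illustration and do not reproduce the published estimates quoted above):

```python
import numpy as np
from scipy import stats

# Hypothetical (invented) annual observations: GDP growth in percent and
# the change in the unemployment rate in percentage points.
growth  = np.array([3.0, 2.0, 4.5, 1.0, -0.5, 2.5, 3.5, 0.5])
d_unemp = np.array([-0.8, 0.2, -1.5, 0.9, 1.8, -0.4, -1.1, 1.2])

# Ordinary least squares for: d_unemp = beta0 + beta1 * growth + error
X = np.column_stack([np.ones_like(growth), growth])
beta, *_ = np.linalg.lstsq(X, d_unemp, rcond=None)

# t-test of H0: beta1 = 0 (no relationship between growth and unemployment).
resid = d_unemp - X @ beta
dof = len(growth) - 2                        # n minus two estimated parameters
s2 = resid @ resid / dof                     # residual variance estimate
cov = s2 * np.linalg.inv(X.T @ X)            # covariance matrix of the estimates
t_stat = beta[1] / np.sqrt(cov[1, 1])
p_value = 2 * stats.t.sf(abs(t_stat), dof)   # two-sided p-value

print(f"beta0 = {beta[0]:.2f}, beta1 = {beta[1]:.2f}, p = {p_value:.4f}")
```

A small p-value would be evidence against the null hypothesis that growth and the change in unemployment are unrelated.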
Econometrics uses standard statistical models to study economic questions, but most often these are based on observational data, rather than data from controlled experiments.[15] In this respect, the design of observational studies in econometrics is similar to the design of studies in other observational disciplines, such as astronomy, epidemiology, sociology and political science. Analysis of data from an observational study is guided by the study protocol, although exploratory data analysis may be useful for generating new hypotheses.[16] Economics often analyses systems of equations and inequalities, such as supply and demand hypothesized to be in equilibrium. Consequently, the field of econometrics has developed methods for identification and estimation of simultaneous equations models. These methods are analogous to methods used in other areas of science, such as the field of system identification in systems analysis and control theory. Such methods may allow researchers to estimate models and investigate their empirical consequences, without directly manipulating the system.
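For instance, a textbook supply-and-demand system in which price and quantity are determined jointly can be written as (a standard illustration, not drawn from a particular study):

$$\begin{aligned} Q^d &= \alpha_0 + \alpha_1 P + u_1 && \text{(demand)}\\ Q^s &= \beta_0 + \beta_1 P + u_2 && \text{(supply)}\\ Q^d &= Q^s && \text{(equilibrium)} \end{aligned}$$

Because price and quantity are determined simultaneously, regressing observed quantity on price traces out neither curve; identification requires something extra, such as a variable that shifts one equation but not the other.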
A simple example of a relationship in econometrics from the field of labour economics is:

$$\ln(\text{wage}) = \beta_0 + \beta_1\,(\text{years of education}) + \varepsilon$$
This example assumes that the natural logarithm of a person's wage is a linear function of the number of years of education that person has acquired. The parameter $\beta_1$ measures the increase in the natural log of the wage attributable to one more year of education. The term $\varepsilon$ is a random variable representing all other factors that may have direct influence on wage. The econometric goal is to estimate the parameters $\beta_0$ and $\beta_1$ under specific assumptions about the random variable $\varepsilon$. For example, if $\varepsilon$ is uncorrelated with years of education, then the equation can be estimated with ordinary least squares.
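Because the dependent variable is in logarithms, $\beta_1$ has an approximate percentage interpretation: for small $\beta_1$,

$$e^{\beta_1} - 1 \approx \beta_1,$$

so an estimate of, say, $\beta_1 = 0.08$ (an illustrative value, not a figure from the sources cited here) would imply roughly 8% higher wages per additional year of education.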
If the researcher could randomly assign people to different levels of education, the data set thus generated would allow estimation of the effect of changes in years of education on wages. In reality, those experiments cannot be conducted. Instead, the econometrician observes the years of education of, and the wages paid to, people who differ along many dimensions. Given this kind of data, the estimated coefficient on years of education in the equation above reflects both the effect of education on wages and the effect of other variables on wages, to the extent that those other variables are correlated with education. For example, people born in certain places may have higher wages and higher levels of education. Unless the econometrician controls for place of birth in the above equation, the effect of birthplace on wages may be falsely attributed to the effect of education on wages.
The most obvious way to control for birthplace is to include a measure of the effect of birthplace in the equation above. Excluding birthplace while assuming that $\varepsilon$ is uncorrelated with education produces a misspecified model. Another technique is to include in the equation an additional set of measured covariates which are not instrumental variables, yet render $\beta_1$ identifiable.[18] An overview of the econometric methods used to study this problem was provided by Card (1999).[19]
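A minimal simulation sketch of this omitted-variable problem in Python (all numbers are invented, and birthplace is reduced to a single binary indicator purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Invented data-generating process: birthplace raises both education
# and wages, so omitting it biases the education coefficient.
birthplace = rng.integers(0, 2, size=n).astype(float)  # 1 = high-wage region
education = 12 + 2 * birthplace + rng.normal(0, 2, n)
log_wage = 1.0 + 0.08 * education + 0.20 * birthplace + rng.normal(0, 0.3, n)

def ols(y, *regressors):
    """OLS coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(y)), *regressors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

print(ols(log_wage, education)[1])               # > 0.08: upward-biased
print(ols(log_wage, education, birthplace)[1])   # ~ 0.08: control included
```

With birthplace omitted, the education coefficient absorbs part of birthplace's effect because the two are correlated; adding the control recovers the coefficient built into the simulated data.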
Integrating statistics into economic theory to develop claims of causality led to disagreements within the discipline, resulting in criticisms of econometrics. Most of these criticisms have been resolved as a result of the credibility revolution and the improved rigor of the potential outcomes framework, used today by applied economists, microeconomists and econometricians for generating results that can be interpreted causally. While econometricians began developing improved statistical measures in the mid-1960s, the 2009 publication of Mostly Harmless Econometrics by the economists Joshua D. Angrist and Jörn-Steffen Pischke summarized the improvements in econometric modeling. Structural causal modeling, which attempts to formalize the limitations of quasi-experimental methods from a causality perspective and thereby allows experimenters to precisely quantify the risks of quasi-experimental research, is the primary academic response to this critique.
Like other forms of statistical analysis, badly specified econometric models may show a spurious relationship where two variables are correlated but causally unrelated. In a study of the use of econometrics in major economics journals, McCloskey concluded that some economists report p-values (following the Fisherian tradition of tests of significance of point null-hypotheses) and neglect concerns of type II errors; some economists fail to report estimates of the size of effects (apart from statistical significance) and to discuss their economic importance. She also argues that some economists fail to use economic reasoning for model selection, especially for deciding which variables to include in a regression.[28][29]
In some cases, economic variables cannot be experimentally manipulated as treatments randomly assigned to subjects.[30] In such cases, economists rely on observational studies, often using data sets with many strongly associated covariates, resulting in enormous numbers of models with similar explanatory ability but different covariates and regression estimates. Regarding the plurality of models compatible with observational data sets, Edward Leamer urged that "professionals ... properly withhold belief until an inference can be shown to be adequately insensitive to the choice of assumptions".[30]
Economist Ronald Coase is widely reported to have said "if you torture the data long enough it will confess".[31] Deirdre McCloskey argues that in published econometric work, economists often fail to use economic reasoning for including or excluding variables, equate statistical significance with substantive significance, and fail to report the power of their findings.[32]
Economic variables are observed in reality, and therefore are not readily isolated for experimental testing. Edward Leamer argued there was no essential difference between econometric analysis and randomized or controlled trials, provided the use of statistical techniques reduces specification bias and the effects of collinearity between the variables to the same order as the uncertainty due to the sample size.[33] Today, this critique is no longer binding, as advances in identification are stronger. Identification today may report the average treatment effect (ATE), the average treatment effect on the treated (ATT), or the local average treatment effect (LATE).[13] Specification bias or selection bias can be removed through advances in sampling techniques and the ability to sample much larger populations through improved communications, data storage, and randomization techniques. Secondly, collinearity can be controlled for through instrumental variables. By reporting either ATT or LATE, researchers can control for or eliminate heterogeneous error, reporting only the effects on the group as defined.
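In the potential-outcomes notation behind these estimands, writing $Y_i(1)$ and $Y_i(0)$ for the outcomes of unit $i$ with and without treatment and $D_i$ for the treatment indicator:

$$\begin{aligned} \text{ATE} &= \mathbb{E}[Y_i(1) - Y_i(0)]\\ \text{ATT} &= \mathbb{E}[Y_i(1) - Y_i(0) \mid D_i = 1]\\ \text{LATE} &= \mathbb{E}[Y_i(1) - Y_i(0) \mid i \text{ is a complier}] \end{aligned}$$

where the "compliers" for the LATE are the subpopulation whose treatment status is moved by the instrument at hand.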
Economists, when using data, may have a number of explanatory variables they want to use that are highly collinear, such that researcher bias may be important in variable selection. Leamer argues that economists can mitigate this by running statistical tests with differently specified models and discarding any inferences which prove to be "fragile", concluding that "professionals ... properly withhold belief until an inference can be shown to be adequately insensitive to the choice of assumptions."[33] Today, this is known as p-hacking, and is not a failure of econometric methodology, but is instead a potential failure of a researcher who may be seeking to prove their own hypothesis.[34] P-hacking is not accepted in economics, and journals increasingly require disclosure of the original data and the code used to perform the statistical analysis.[35] However, Sala-i-Martin[36] argued that it is possible to specify two models suggesting contrary relationships between the same two variables. Robert Goldfarb labeled this the emerging recalcitrant result phenomenon.[37] This is known as two-way causality, and should be discussed with respect to the underlying theory that the mechanism is attempting to capture.
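A sketch of Leamer's prescription in code: re-estimate the coefficient of interest under many choices of control variables and inspect how much it moves (the data-generating process and variable names here are invented):

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(1)
n, k = 500, 6

# Invented data: six candidate controls, a regressor of interest x,
# and an outcome y whose true x-coefficient is 0.5.
controls = rng.normal(size=(n, k))
x = controls @ rng.normal(size=k) * 0.3 + rng.normal(size=n)
y = 0.5 * x + controls @ rng.normal(size=k) * 0.2 + rng.normal(size=n)

def coef_of_x(y, x, ctrl):
    """OLS coefficient on x, with an intercept and the given controls."""
    X = np.column_stack([np.ones(len(y)), x, ctrl])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Re-estimate under every subset of the candidate controls (2^6 = 64 models).
estimates = [
    coef_of_x(y, x, controls[:, list(subset)])
    for r in range(k + 1)
    for subset in combinations(range(k), r)
]

# An inference is "fragile" if its sign or magnitude swings across subsets.
print(f"x coefficient ranges from {min(estimates):.2f} to {max(estimates):.2f}")
```

An inference that keeps its sign and rough magnitude across all specifications is, in Leamer's sense, adequately insensitive to the choice of assumptions.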
Kennedy (1998, pp. 1–2) reports econometricians as being accused of using sledgehammers to crack open peanuts: that is, they use a wide range of complex statistical techniques while turning a blind eye to data deficiencies and the many questionable assumptions required for the application of these techniques.[38] Kennedy quotes Stefan Valavanis's 1959 econometrics textbook's critique of practice:
Econometric theory is like an exquisitely balanced French recipe, spelling out precisely with how many turns to mix the sauce, how many carats of spice to add, and for how many milliseconds to bake the mixture at exactly 474 degrees of temperature. But when the statistical cook turns to raw materials, he finds that hearts of cactus fruit are unavailable, so he substitutes chunks of cantaloupe; where the recipe calls for vermicelli he uses shredded wheat; and he substitutes green garment dye for curry, ping-pong balls for turtle's eggs, and, for Chalifougnac vintage 1883, a can of turpentine. (1959, p. 83)[39]
Looking primarily at macroeconomics, Lawrence Summers has criticized econometric formalism, arguing that "the empirical facts of which we are most confident and which provide the most secure basis for theory are those that require the least sophisticated statistical analysis to perceive." Summers is not critiquing the methodology itself but instead its usefulness in developing macroeconomic theory.
He looks at two well-cited macroeconometric studies (Hansen & Singleton (1982, 1983), and Bernanke (1986)) and argues that while both make brilliant use of econometric methods, neither paper speaks to formal theoretical proof. Noting that in the natural sciences, "investigators rush to check out the validity of claims made by rival laboratories and then build on them," Summers points out that this rarely happens in economics, which to him is a result of the fact that "the results [of econometric studies] are rarely an important input to theory creation or the evolution of professional opinion more generally." To Summers:[40]
Successful empirical research has been characterized by attempts to gauge the strength of associations rather than to estimate structural parameters, verbal characterizations of how causal relations might operate rather than explicit mathematical models, and the skillful use of carefully chosen natural experiments rather than sophisticated statistical technique to achieve identification.
Robert Lucas criticised the use of overly simplistic econometric models of the macroeconomy to predict the implications of economic policy, arguing that the structural relationships observed in historical models break down if decision makers adjust their preferences to reflect policy changes. Lucas argued that policy conclusions drawn from contemporary large-scale macroeconometric models were invalid, as economic actors would change their expectations of the future and adjust their behaviour accordingly. A good macroeconometric model should incorporate microfoundations to model the effects of policy change, with equations representing economic representative agents responding to economic changes based on rational expectations of the future; this implies that their pattern of behaviour might be quite different if economic policy changed.
The current-day Austrian School of Economics typically rejects much of econometric modeling. The historical data used to make econometric models, they claim, represents behavior under circumstances idiosyncratic to the past; thus econometric models show correlational, not causal, relationships. Econometricians have addressed this criticism by adopting quasi-experimental methodologies. Austrian school economists remain skeptical of these corrected models, continuing in their belief that statistical methods are unsuited for the social sciences.[41]
The Austrian School holds that the counterfactual must be known for a causal relationship to be established. The changes due to the counterfactual could then be extracted from the observed changes, leaving only the changes caused by the variable. Meeting this critique is very challenging, since "there is no dependable method for ascertaining the uniquely correct counterfactual" for historical data.[42] For non-historical data, the Austrian critique is met with randomized controlled trials, which must be purposefully prepared, something historical data is not.[43] The use of randomized controlled trials is becoming more common in social science research. In the United States, for example, the Education Sciences Reform Act of 2002 made funding for education research contingent on scientific validity, defined in part as "experimental designs using random assignment, when feasible."[44] In answering questions of causation, parametric statistics only addresses the Austrian critique in randomized controlled trials.
If the data is not from a randomized controlled trial, econometricians meet the Austrian critique with quasi-experimental methodologies, including identifying and exploiting natural experiments. These methodologies attempt to extract the counterfactual post hoc so that the use of the tools of parametric statistics is justified. Since parametric statistics depends on any observation following a Gaussian distribution, which is only guaranteed by the central limit theorem in a randomization methodology, the use of tools such as the confidence interval will be outside of their specification: the amount of selection bias will always be unknown.[45]
^ P. A. Samuelson, T. C. Koopmans, and J. R. N. Stone (1954). "Report of the Evaluative Committee for Econometrica", Econometrica 22(2), p. 142 [pp. 141–146], as described and cited in Pesaran (1987) above.
^abc Greene, William (2012). "Chapter 1: Econometrics". Econometric Analysis (7th ed.). Pearson Education. pp. 47–48. ISBN 9780273753568. Ultimately, all of these will require a common set of tools, including, for example, the multiple regression model, the use of moment conditions for estimation, instrumental variables (IV) and maximum likelihood estimation. With that in mind, the organization of this book is as follows: The first half of the text develops fundamental results that are common to all the applications. The concept of multiple regression and the linear regression model in particular constitutes the underlying platform of most modeling, even if the linear model itself is not ultimately used as the empirical specification.
^ab Greene, William (2012). Econometric Analysis (7th ed.). Pearson Education. pp. 34, 41–42. ISBN 9780273753568.
^ab Wooldridge, Jeffrey (2012). "Chapter 1: The Nature of Econometrics and Economic Data". Introductory Econometrics: A Modern Approach (5th ed.). South-Western Cengage Learning. p. 2. ISBN 9781111531041.
^ Wooldridge, Jeffrey (2013). Introductory Econometrics: A Modern Approach. South-Western, Cengage Learning. ISBN 978-1-111-53104-1.
^ Herman O. Wold (1969). "Econometrics as Pioneering in Nonexperimental Model Building", Econometrica, 37(3), pp. 369–381. Archived 24 August 2017 at the Wayback Machine.
^ Card, David (1999). "The Causal Effect of Education on Earnings". In Ashenfelter, O.; Card, D. (eds.). Handbook of Labor Economics. Amsterdam: Elsevier. pp. 1801–1863. ISBN 978-0444822895.
^"Home".www.econometricsociety.org. Retrieved14 February 2024.
^abLeamer, Edward (March 1983). "Let's Take the Con out of Econometrics".American Economic Review.73 (1):31–43.JSTOR1803924.
^Gordon Tullock, "A Comment on Daniel Klein's 'A Plea to Economists Who Favor Liberty'",Eastern Economic Journal, Spring 2001, note 2 (Text: "As Ronald Coase says, 'if you torture the data long enough it will confess'." Note: "I have heard him say this several times. So far as I know he has never published it.")
^Goldfarb, Robert S. (December 1997). "Now you see it, now you don't: emerging contrary results in economics".Journal of Economic Methodology.4 (2):221–244.doi:10.1080/13501789700000016.ISSN1350-178X.
^Kennedy, P (1998)A Guide to Econometrics, Blackwell, 4th Edition
^Valavanis, Stefan (1959)Econometrics, McGraw-Hill
^ Garrison, Roger. "Mises and His Methods". In The Meaning of Ludwig von Mises: Contributions in Economics, Sociology, Epistemology, and Political Philosophy, ed. Herbener. pp. 102–117.
Kmenta, J. (2025). "Econometrics: A Failed Science?". In International Encyclopedia of Statistical Science (pp. 773–776). Springer, Berlin, Heidelberg.
Hendry, D. F. (2000). Econometrics: Alchemy or Science? Essays in Econometric Methodology. OUP Oxford.[1]
Moosa, I. A. (2017). Econometrics as a Con Art: Exposing the Limitations and Abuses of Econometrics. Edward Elgar Publishing.
Pinto, H. (2011). "The Role of Econometrics in Economic Science: An Essay About the Monopolization of Economic Methodology by Econometric Methods". The Journal of Socio-Economics, 40(4), 436–443.
Swann, G. P. (2006). Putting Econometrics in Its Place: A New Direction in Applied Economics. Edward Elgar Publishing.[2]
^ Hansen, Bruce E. (1996). "Methodology: Alchemy or Science?". The Economic Journal, vol. 106, no. 438, pp. 1398–1413. JSTOR, https://doi.org/10.2307/2235531. Accessed 22 Oct. 2025.
^ Adkisson, Richard V. (2008). "Putting Econometrics in its Place: A New Direction in Applied Economics". Review of Social Economy, Vol. 66, No. 1, pp. 127–129.