This article is about statistics and government policy. For Nazi analogies in internet discussions, see Godwin's law.
Goodhart's law is an adage that has been stated as, "When a measure becomes a target, it ceases to be a good measure".[1] It is named after British economist Charles Goodhart, who is credited with expressing the core idea of the adage in a 1975 article on monetary policy in the United Kingdom:[2]
Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.[3]
Charles Goodhart, for whom the adage is named, delivering a speech in 2012
Numerous concepts are related to this idea, at least one of which predates Goodhart's statement.[4] Notably, Campbell's law likely has precedence, as Jeff Rodamar has argued, since various formulations date to 1969.[5] Other academics had similar insights at the time. Jerome Ravetz's 1971 book Scientific Knowledge and Its Social Problems[6] also predates Goodhart, though it does not formulate the same law. Ravetz discusses how systems in general can be gamed, focusing on cases where the goals of a task are complex, sophisticated, or subtle. In such cases, the people with the skills to execute the tasks properly can instead pursue their own goals to the detriment of the assigned tasks. When the goals are instantiated as metrics, this could be seen as equivalent to Goodhart's and Campbell's claims.
Shortly after Goodhart's publication, others suggested closely related ideas, including the Lucas critique (1976). As applied in economics, the law is also implicit in the idea of rational expectations, a theory in economics that states that those who are aware of a system of rewards and punishments will optimize their actions within that system to achieve their desired results. For example, if an employee is rewarded by the number of cars sold each month, they will try to sell more cars, even at a loss.
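This incentive mechanism can be illustrated with a toy simulation (hypothetical numbers and code, not drawn from any cited source): once the monthly sales count becomes the target, a seller who optimizes it will accept unprofitable deals, so the measured number rises while the underlying goal of profit suffers.

    # Toy illustration of Goodhart's law: rewarding the proxy (cars sold)
    # instead of the goal (profit). All figures are hypothetical.

    # Each offer is (sale_price, dealer_cost); selling below cost loses money.
    offers = [(21000, 20000), (19500, 20000), (23000, 20000), (18000, 20000)]

    def summarise(sales):
        """Return (number of cars sold, total profit) for the accepted offers."""
        return len(sales), sum(price - cost for price, cost in sales)

    # Goal-directed behaviour: accept only profitable offers.
    profit_driven = [(p, c) for p, c in offers if p > c]

    # Target-directed behaviour: accept every offer to maximise the sales count.
    target_driven = offers

    print("profit-driven:", summarise(profit_driven))  # (2, 4000): fewer sales, more profit
    print("target-driven:", summarise(target_driven))  # (4, 1500): more sales, less profit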
While it originated in the context of market responses, the law has profound implications for the selection of high-level targets in organizations.[3] Jon Danielsson states the law as
Any statistical relationship will break down when used for policy purposes.
He suggests a corollary of the law for use in financial risk modelling:
A risk model breaks down when used for regulatory purposes.[7]
Mario Biagioli related the concept to consequences of using citation impact measures to estimate the importance of scientific publications:[8][9]
All metrics of scientific evaluation are bound to be abused. Goodhart's law [...] states that when a feature of the economy is picked as an indicator of the economy, then it inexorably ceases to function as that indicator because people start to game it.
Later writers broadened Goodhart's point about monetary policy into a general adage about measures and targets in accounting and evaluation systems. In a book chapter published in 1996, Keith Hoskin wrote:
'Goodhart's Law' – That every measure which becomes a target becomes a bad measure – is inexorably, if ruefully, becoming recognized as one of the overriding laws of our times. Ruefully, for this law of unintended consequences seems so inescapable. But it does so, I suggest, because it is the inevitable corollary of that invention of modernity: accountability.[10][full citation needed]
In a 1997 paper on the misuse of accountability models in education, anthropologist Marilyn Strathern cited Hoskin expressing Goodhart's Law as "When a measure becomes a target, it ceases to be a good measure", and linked the sentiment to the history of accountability stretching back to Britain in the 1800s:
When a measure becomes a target, it ceases to be a good measure. The more a 2.1 examination performance becomes an expectation, the poorer it becomes as a discriminator of individual performances. Hoskin describes this as 'Goodhart's law', after the latter's observation on instruments for monetary control which led to other devices for monetary flexibility having to be invented. However, targets that seem measurable become enticing tools for improvement. The linking of improvement to commensurable increase produced practices of wide application. It was that conflation of 'is' and 'ought', alongside the techniques of quantifiable written assessments, which led in Hoskin's view to the modernist invention of accountability. This was articulated in Britain for the first time around 1800 as 'the awful idea of accountability' (Ref. 3, p. 268).[1]
The San Francisco Declaration on Research Assessment denounces several problems in scientific evaluation; one of them, as Goodhart's law describes, is that the measure has become a target. The correlation between the h-index and scientific awards has been decreasing since the h-index came into widespread use.[11]
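For reference, the h-index underlying that observation is the largest number h such that an author has h publications with at least h citations each; the following is a minimal sketch of that calculation (in Python, not taken from any cited source):

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        # Sort citation counts in descending order, then find the last
        # 1-based rank at which the paper still has at least that many citations.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Example: five papers cited 10, 8, 5, 4 and 3 times give an h-index of 4.
    print(h_index([10, 8, 5, 4, 3]))  # -> 4

Because the number is computed from citation counts alone, it is straightforward to game, which is the dynamic Biagioli and the Declaration describe.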
In healthcare, the misapplication of metrics can lead to adverse outcomes. For instance, hospitals striving to reduce length of stay (LOS) may inadvertently discharge patients prematurely, leading to increased emergency readmissions.[14][self-published source]
According to Tom and David Chivers in How to Read Numbers, the law applied to the British government response to the COVID-19 pandemic when it announced a target of 100,000 COVID-19 tests per day: initially a target for tests actually carried out, and later for the maximum capacity of test-taking. The number of useful diagnostic tests was far lower than the government-reported number when it announced it had met the target.[15]
The law was used to criticize the British Thatcher government for trying to conduct monetary policy on the basis of targets for broad and narrow money,[16] but it reflects a much more general phenomenon.[17]
Campbell's law – "The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures"
Cobra effect – when incentives designed to solve a problem end up rewarding people for making it worse
Confirmation bias – the tendency to search for and recall information that confirms or supports one's prior beliefs
Gaming the system – manipulating rules and procedures to obtain a desired outcome
Hawthorne effect – when people modify an aspect of their behavior in response to their awareness of being observed
Lucas critique – the observation that it is naive to try to predict the effects of a change in economic policy entirely on the basis of relationships observed in historical data
^ Ravetz, Jerome R. (1971). Scientific Knowledge and Its Social Problems. New Brunswick, New Jersey: Transaction Publishers. pp. 295–296. ISBN 1-56000-851-2. OCLC 32779931.
Malone, Kenny; Gonzalez, Sarah; Horowitz-Ghazi, Alexi; Goldmark, Alex (21 November 2018). "The Laws Of The Office". Planet Money (Podcast). NPR. Retrieved 3 July 2020.