The Berndt–Hall–Hall–Hausman (BHHH) algorithm is a numerical optimization algorithm similar to the Newton–Raphson algorithm, but it replaces the observed negative Hessian matrix with the outer product of the gradient. This approximation is based on the information matrix equality and is therefore only valid while maximizing a likelihood function.[1] The BHHH algorithm is named after its four originators: Ernst R. Berndt, Bronwyn Hall, Robert Hall, and Jerry Hausman.[2]
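The identity behind this substitution is the information matrix equality: under standard regularity conditions, at the true parameter value the expected outer product of the score equals the negative expected Hessian of the log-likelihood (a standard result, stated here for context):

\operatorname{E}\!\left[\frac{\partial \ln L}{\partial \beta}\,\frac{\partial \ln L}{\partial \beta}'\right] = -\operatorname{E}\!\left[\frac{\partial^2 \ln L}{\partial \beta\,\partial \beta'}\right]

This is why the outer-product approximation is appropriate only when the objective being maximized is a likelihood.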
If a nonlinear model is fitted to the data, one often needs to estimate coefficients through optimization. A number of optimization algorithms have the following general structure. Suppose that the function to be optimized is Q(β). Then the algorithms are iterative, defining a sequence of approximations, β_k, given by

\beta_{k+1} = \beta_k + \lambda_k A_k \frac{\partial Q}{\partial \beta}(\beta_k),
where β_k is the parameter estimate at step k, and λ_k is a parameter (called the step size) which partly determines the particular algorithm. For the BHHH algorithm, λ_k is determined by calculations within a given iterative step, involving a line search until a point β_{k+1} is found satisfying certain criteria. In addition, for the BHHH algorithm, Q has the form

Q = \sum_{i=1}^{N} Q_i,

where Q_i is the contribution of the i-th observation to the log-likelihood,
and A is calculated using

A_k = \left[ \sum_{i=1}^{N} \frac{\partial Q_i}{\partial \beta}(\beta_k) \, \frac{\partial Q_i}{\partial \beta}(\beta_k)' \right]^{-1}.
In other cases, e.g. Newton–Raphson, A_k can have other forms. The BHHH algorithm has the advantage that, if certain conditions apply, convergence of the iterative procedure is guaranteed.[citation needed]
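The iteration above can be sketched in NumPy. The normal-distribution likelihood, the synthetic data, the variable names, and the backtracking line search below are illustrative assumptions for this sketch, not details from the original description:

```python
import numpy as np

# Synthetic data from a normal distribution (illustrative choice).
rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.5, size=500)

def per_obs_score(beta, y):
    """Per-observation gradients dQ_i/dbeta for a normal model with
    parameters beta = (mu, s), where s = log(sigma)."""
    mu, s = beta
    sigma2 = np.exp(2.0 * s)
    r = y - mu
    g_mu = r / sigma2
    g_s = -1.0 + r**2 / sigma2
    return np.column_stack([g_mu, g_s])  # N x 2 matrix of scores

def loglik(beta, y):
    """Q(beta) = sum of per-observation log-likelihood contributions Q_i."""
    mu, s = beta
    return np.sum(-0.5 * np.log(2 * np.pi) - s
                  - 0.5 * (y - mu)**2 / np.exp(2 * s))

def bhhh(beta, y, tol=1e-8, max_iter=500):
    for _ in range(max_iter):
        G = per_obs_score(beta, y)     # one score row per observation
        g = G.sum(axis=0)              # gradient of Q at beta_k
        A = np.linalg.inv(G.T @ G)     # inverse outer product of scores
        direction = A @ g              # BHHH ascent direction
        # Backtracking line search for the step size lambda_k: halve
        # until the log-likelihood improves (one simple criterion).
        lam, q0 = 1.0, loglik(beta, y)
        while loglik(beta + lam * direction, y) < q0 and lam > 1e-10:
            lam *= 0.5
        beta = beta + lam * direction
        if np.linalg.norm(lam * direction) < tol:
            break
    return beta

mu_hat, s_hat = bhhh(np.array([0.0, 0.0]), y)
```

Here `G.T @ G` is the sum of outer products of the per-observation score vectors, i.e. the BHHH stand-in for the negative Hessian; at the maximum, `mu_hat` and `exp(s_hat)` approach the sample mean and (maximum-likelihood) standard deviation.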