
In statistics, nonlinear regression is a form of regression analysis in which observational data are modeled by a function which is a nonlinear combination of the model parameters and depends on one or more independent variables. The data are fitted by a method of successive approximations (iterations).
In nonlinear regression, a statistical model of the form

y ∼ f(x, β)

relates a vector of independent variables, x, and its associated observed dependent variables, y. The function f is nonlinear in the components of the vector of parameters β, but otherwise arbitrary. For example, the Michaelis–Menten model for enzyme kinetics has two parameters and one independent variable, related by:[a]

f(x, β) = β₁x / (β₂ + x)
This function, which is a rectangular hyperbola, is nonlinear because it cannot be expressed as a linear combination of the two βs.
Systematic error may be present in the independent variables, but its treatment is outside the scope of regression analysis. If the independent variables are not error-free, this is an errors-in-variables model, also outside this scope.
Other examples of nonlinear functions include exponential functions, logarithmic functions, trigonometric functions, power functions, Gaussian functions, and Lorentz distributions. Some functions, such as the exponential or logarithmic functions, can be transformed so that they are linear. When so transformed, standard linear regression can be performed but must be applied with caution; see § Linearization § Transformation, below, for more details.

In microbiology and biotechnology, nonlinear regression is used to model complex microbial growth kinetics. While simple growth follows monoauxic functions (such as the Gompertz or Boltzmann models), multiphasic (polyauxic) growth is modeled using linear combinations of these nonlinear functions. Estimating parameters for these complex models often requires robust regression techniques (e.g., using a Lorentzian loss function to mitigate outliers) and global optimization algorithms (such as differential evolution with L-BFGS-B) to avoid local minima and ensure biologically interpretable results.[1][2]
In general, there is no closed-form expression for the best-fitting parameters, as there is in linear regression. Usually numerical optimization algorithms are applied to determine the best-fitting parameters. Again in contrast to linear regression, there may be many local minima of the function to be optimized and even the global minimum may produce a biased estimate. In practice, estimated values of the parameters are used, in conjunction with the optimization algorithm, to attempt to find the global minimum of a sum of squares.
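The iterative fitting described above can be sketched with SciPy's `curve_fit`, which refines an initial parameter guess by successive least-squares approximations. The Michaelis–Menten model is the one from the example earlier; the "true" parameter values and the substrate concentrations are illustrative assumptions, not data from the source:

```python
import numpy as np
from scipy.optimize import curve_fit

# Michaelis–Menten model: v = Vmax * s / (Km + s)
def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

# Synthetic substrate concentrations and noise-free rates generated
# from assumed "true" parameters Vmax = 2.0, Km = 0.5.
s = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
v = michaelis_menten(s, 2.0, 0.5)

# Iterative least-squares fit; p0 is the initial guess that the
# optimizer refines by successive approximation.
popt, pcov = curve_fit(michaelis_menten, s, v, p0=[1.0, 1.0])
vmax_hat, km_hat = popt
```

Because there is no closed form for the estimates, the quality of `p0` matters: a poor starting point can leave the optimizer in a local minimum of the sum of squares.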
For details concerning nonlinear data modeling see least squares and non-linear least squares.
The assumption underlying this procedure is that the model can be approximated by a linear function, namely a first-order Taylor series:

f(xᵢ, β) ≈ f(xᵢ, β⁰) + Σⱼ Jᵢⱼ (βⱼ − βⱼ⁰)

where Jᵢⱼ = ∂f(xᵢ, β)/∂βⱼ are Jacobian matrix elements. It follows from this that the least squares estimators are given by

β̂ ≈ (JᵀJ)⁻¹ Jᵀ y,
compare generalized least squares with covariance matrix proportional to the unit matrix. The nonlinear regression statistics are computed and used as in linear regression statistics, but using J in place of X in the formulas.
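The linearization step can be turned into the classic Gauss–Newton iteration: solve the normal equations (JᵀJ)·step = Jᵀr for the parameter update at each pass. This is a minimal sketch, assuming an illustrative saturation model f(x, β) = β₀(1 − e^(−β₁x)) and noise-free synthetic data; it is not the only way the linearized estimator can be computed:

```python
import numpy as np

# Assumed example model: f(x, b) = b0 * (1 - exp(-b1 * x)).
def f(x, b):
    return b[0] * (1.0 - np.exp(-b[1] * x))

# Jacobian J with elements J[i, j] = d f(x_i, b) / d b_j.
def jacobian(x, b):
    J = np.empty((x.size, 2))
    J[:, 0] = 1.0 - np.exp(-b[1] * x)
    J[:, 1] = b[0] * x * np.exp(-b[1] * x)
    return J

x = np.linspace(0.5, 5.0, 20)
y = f(x, np.array([3.0, 0.8]))    # noise-free data, true b = (3.0, 0.8)

b = np.array([1.0, 1.0])          # initial estimate
for _ in range(50):
    r = y - f(x, b)               # residuals
    J = jacobian(x, b)
    # Normal equations: step = (J^T J)^{-1} J^T r
    step = np.linalg.solve(J.T @ J, J.T @ r)
    b = b + step
    if np.linalg.norm(step) < 1e-12:
        break
```

At convergence the same (JᵀJ)⁻¹ matrix, scaled by the residual variance, is what stands in for (XᵀX)⁻¹ when the linear-regression statistics are reused with J in place of X.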
When the function f itself is not known analytically, but needs to be linearly approximated from n + 1, or more, known values (where n is the number of estimators), the best estimator is obtained directly from the Linear Template Fit[3] (see also linear least squares).
The linear approximation introduces bias into the statistics. Therefore, more caution than usual is required in interpreting statistics derived from a nonlinear model.
The best-fit curve is often assumed to be that which minimizes the sum of squared residuals. This is the ordinary least squares (OLS) approach. However, in cases where the dependent variable does not have constant variance, or there are some outliers, a sum of weighted squared residuals may be minimized; see weighted least squares. Each weight should ideally be equal to the reciprocal of the variance of the observation, or the reciprocal of the dependent variable to some power in the outlier case,[4] but weights may be recomputed on each iteration, in an iteratively weighted least squares algorithm.
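Weighting can be sketched with `curve_fit`'s `sigma` argument, which makes the optimizer minimize the sum of weighted squared residuals with weightᵢ = 1/σᵢ². The exponential model, its parameter values, and the assumed error structure (standard deviation proportional to the response) are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed example model with non-constant variance in y.
def model(x, a, b):
    return a * np.exp(b * x)

x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
y = 2.0 * np.exp(0.7 * x)          # noise-free values, true a = 2, b = 0.7

# Suppose larger responses are measured less precisely: standard
# deviation proportional to y.  Passing sigma makes curve_fit minimize
# sum(((y - model) / sigma)**2), i.e. weight_i = 1 / sigma_i**2.
sigma = 0.05 * y
popt, _ = curve_fit(model, x, y, p0=[1.0, 1.0],
                    sigma=sigma, absolute_sigma=True)
a_hat, b_hat = popt
```

With noise-free data the weighted and unweighted fits coincide; with real heteroscedastic data the weights keep the large, noisy responses from dominating the fit.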
Some nonlinear regression problems can be moved to a linear domain by a suitable transformation of the model formulation.
For example, consider the nonlinear regression problem

y = a e^(bx) U

with parameters a and b and with multiplicative error term U. If we take the logarithm of both sides, this becomes

ln(y) = ln(a) + bx + u,

where u = ln(U), suggesting estimation of the unknown parameters by a linear regression of ln(y) on x, a computation that does not require iterative optimization. However, use of a nonlinear transformation requires caution. The influences of the data values will change, as will the error structure of the model and the interpretation of any inferential results. These may not be desired effects. On the other hand, depending on what the largest source of error is, a nonlinear transformation may distribute the errors in a Gaussian fashion, so the choice to perform a nonlinear transformation must be informed by modeling considerations.
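The log-transformation above reduces to an ordinary straight-line fit, here sketched with `numpy.polyfit`; the parameter values a = 3, b = 0.5 are illustrative assumptions:

```python
import numpy as np

# Model y = a * exp(b*x) * U; taking logs gives the linear form
# ln(y) = ln(a) + b*x + u, fit by ordinary linear regression.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 3.0 * np.exp(0.5 * x)          # noise-free data, true a = 3, b = 0.5

# Degree-1 polynomial fit of ln(y) on x: slope estimates b,
# intercept estimates ln(a).
slope, intercept = np.polyfit(x, np.log(y), 1)
a_hat = np.exp(intercept)
b_hat = slope
```

Note the caveat from the text: this fit minimizes squared errors on the log scale, so it is only equivalent to the nonlinear fit when the error really is multiplicative.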
For Michaelis–Menten kinetics, the linear Lineweaver–Burk plot

1/v = 1/V_max + (K_m/V_max) · (1/[S])

of 1/v against 1/[S] has been much used. However, since it is very sensitive to data error and is strongly biased toward fitting the data in a particular range of the independent variable, [S], its use is strongly discouraged.
For error distributions that belong to the exponential family, a link function may be used to transform the parameters under the generalized linear model framework.

The independent or explanatory variable (say X) can be split up into classes or segments and linear regression can be performed per segment. Segmented regression with confidence analysis may yield the result that the dependent or response variable (say Y) behaves differently in the various segments.[5] For example, the figure shows that the soil salinity (X) initially exerts no influence on the crop yield (Y) of mustard, until a critical or threshold value (breakpoint), after which the yield is affected negatively.[6]
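A two-segment fit of this kind can be sketched by a simple grid search over candidate breakpoints, fitting a line on each side and keeping the split with the smallest total residual sum of squares. This is a minimal illustration, not the confidence-analysis method of the cited work, and the flat-then-declining data mimicking the salinity–yield example are synthetic assumptions:

```python
import numpy as np

# Synthetic data: response flat at 5.0 below a threshold of 4.0,
# then declining with slope -0.8 (mimics the yield-vs-salinity figure).
x = np.linspace(0.0, 10.0, 21)
y = np.where(x < 4.0, 5.0, 5.0 - 0.8 * (x - 4.0))

best = None
for bp in x[2:-2]:                  # candidates, keeping >= 2 points per side
    left, right = x <= bp, x > bp
    rss = 0.0
    for mask in (left, right):
        coef = np.polyfit(x[mask], y[mask], 1)   # straight line per segment
        rss += np.sum((y[mask] - np.polyval(coef, x[mask])) ** 2)
    if best is None or rss < best[1]:
        best = (bp, rss)

breakpoint_est, best_rss = best
```

On real data the residual sum of squares near the breakpoint is flatter, which is why segmented-regression software reports a confidence interval for the breakpoint rather than a single grid value.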