The smooth package has a mechanism for treating data with zero values. This is useful for cases of intermittent demand (demand that happens at random). All the univariate functions in the package have a parameter occurrence that allows handling this type of data. The canonical model used in smooth is called "iETS", the intermittent exponential smoothing model. This vignette explains how the iETS model and its occurrence part are implemented in the smooth package.
The canonical general iETS model (called iETS\(_G\)) can be summarised as:\[\begin{equation} \label{eq:iETS} \tag{1} \begin{matrix} y_t = o_t z_t \\ o_t \sim \text{Bernoulli} \left(p_t \right) \\ p_t = f(\mu_{a,t}, \mu_{b,t}) \\ a_t = w_a(v_{a,t-L}) + r_a(v_{a,t-L}) \epsilon_{a,t} \\ v_{a,t} = f_a(v_{a,t-L}) + g_a(v_{a,t-L}) \epsilon_{a,t} \\ b_t = w_b(v_{b,t-L}) + r_b(v_{b,t-L}) \epsilon_{b,t} \\ v_{b,t} = f_b(v_{b,t-L}) + g_b(v_{b,t-L}) \epsilon_{b,t} \end{matrix},\end{equation}\] where \(y_t\) is the observed value, \(z_t\) is the demand size, which is a pure multiplicative ETS model on its own, \(w(\cdot)\) is the measurement function, \(r(\cdot)\) is the error function, \(f(\cdot)\) is the transition function and \(g(\cdot)\) is the persistence function (the subscripts separate the functions for the different parts of the model). These four functions define how the elements of the state vector \(v_{t}\) interact with each other. Furthermore, \(\epsilon_{a,t}\) and \(\epsilon_{b,t}\) are mutually independent error terms that follow an unknown distribution, and \(o_t\) is the binary occurrence variable (1 if demand is non-zero, 0 if there is no demand in period \(t\)), which follows a Bernoulli distribution with probability \(p_t\). \(\mu_{a,t}\) and \(\mu_{b,t}\) are the conditional expectations of the unobservable variables \(a_t\) and \(b_t\). Any ETS model can be used for \(a_t\) and \(b_t\), and their transformation into the probability \(p_t\) depends on the type of the error. The general formula for the multiplicative error is:\[\begin{equation} \label{eq:oETS(MZZ)} p_t = \frac{\mu_{a,t}}{\mu_{a,t}+\mu_{b,t}} ,\end{equation}\] while for the additive error it is:\[\begin{equation} \label{eq:oETS(AZZ)} p_t = \frac{\exp(\mu_{a,t})}{\exp(\mu_{a,t})+\exp(\mu_{b,t})} .\end{equation}\] This is because both \(\mu_{a,t}\) and \(\mu_{b,t}\) need to be positive, while the additive error models are defined on the whole real plane.
The canonical iETS model assumes that a pure multiplicative model is used for both \(a_t\) and \(b_t\). This type of model is positively defined for any values of error, trend and seasonality, which is essential for the values of \(a_t\) and \(b_t\) and their expectations. If a combination of additive and multiplicative error models is used, then the additive part is exponentiated prior to using the formulae for the calculation of the probability.
An example of an iETS model is the basic local-level model iETS(M,N,N)\(_G\)(M,N,N)(M,N,N):\[\begin{equation} \label{eq:iETSGExample} \begin{matrix} y_t = o_t z_t \\ z_t = l_{z,t-1} \left(1 + \epsilon_{z,t} \right) \\ l_{z,t} = l_{z,t-1}( 1 + \alpha_{z} \epsilon_{z,t}) \\ (1 + \epsilon_{z,t}) \sim \text{log}\mathcal{N}(0,\sigma_\epsilon^2) \\ \\ o_t \sim \text{Bernoulli} \left(p_t \right) \\ p_t = \frac{\mu_{a,t}}{\mu_{a,t}+\mu_{b,t}} \\ \\ a_t = l_{a,t-1} \left(1 + \epsilon_{a,t} \right) \\ l_{a,t} = l_{a,t-1}( 1 + \alpha_{a} \epsilon_{a,t}) \\ \mu_{a,t} = l_{a,t-1} \\ \\ b_t = l_{b,t-1} \left(1 + \epsilon_{b,t} \right) \\ l_{b,t} = l_{b,t-1}( 1 + \alpha_{b} \epsilon_{b,t}) \\ \mu_{b,t} = l_{b,t-1} \\ \end{matrix},\end{equation}\] where \(l_{a,t}\) and \(l_{b,t}\) are the levels for each of the shape parameters, \(\alpha_{a}\) and \(\alpha_{b}\) are the smoothing parameters, and the error terms \(1+\epsilon_{a,t}\) and \(1+\epsilon_{b,t}\) are positive and have means of one. We do not make any other distributional assumptions concerning the error terms. More advanced models can be constructed by specifying the ETS models for each part and / or adding explanatory variables.
In the notation of the model iETS(M,N,N)\(_G\)(M,N,N)(M,N,N), the first brackets describe the ETS model for the demand sizes, the subscript letter indicates the specific subtype of model (see below), the second brackets describe the ETS model underlying the variable \(a_t\), and the last ones stand for the model for \(b_t\). If only one variable is needed (either \(a_t\) or \(b_t\)), then the redundant brackets are dropped, so that the notation simplifies, for example, to iETS(M,N,N)\(_O\)(M,N,N). If the same type of model is used for both demand sizes and demand occurrence, then the second brackets can be dropped as well, simplifying the notation to iETS(M,N,N)\(_G\). Furthermore, the notation without any brackets, such as iETS\(_G\), stands for the general class of a specific subtype of iETS model (any error / trend / seasonality). Also, given that iETS\(_G\) is the most general of all iETS models, the "\(G\)" can be dropped when the properties are applicable to all subtypes. Finally, the "oETS" notation is used when the occurrence part of the model is discussed explicitly, skipping the demand sizes.
The concentrated likelihood function for the iETS model is:\[\begin{equation}\label{eq:LogNormalConcentratedLogLikelihood} \tag{2} \ell(\boldsymbol{\theta}, \hat{\sigma}_\epsilon^2 | \textbf{Y}) = -\frac{1}{2} \left( T \log(2 \pi e \hat{\sigma}_\epsilon^2) + T_0 \right) - {\sum_{o_t=1}} \log(z_t) + {\sum_{o_t=1}} \log(\hat{p}_t) + {\sum_{o_t=0}} \log(1-\hat{p}_t) ,\end{equation}\] where \(\textbf{Y}\) is the vector of all the in-sample observations, \(\boldsymbol{\theta}\) is the vector of parameters to estimate (initial values and smoothing parameters), \(T\) is the total number of observations, \(T_0\) is the number of zero observations, \(\hat{\sigma}_\epsilon^2 = \frac{1}{T} \sum_{o_t=1} \log^2 \left(1 + \epsilon_t \right)\) is the scale parameter of the one-step-ahead forecast error for the demand sizes and \(\hat{p}_t\) is the estimated probability of a non-zero demand at time \(t\). This likelihood is used for the estimation of all the special cases of the iETS\(_G\) model.
Depending on the restrictions imposed on \(a_t\) and \(b_t\), there are several subtypes of the iETS model: iETS\(_F\) (fixed probability), iETS\(_O\) (odds ratio), iETS\(_I\) (inverse odds ratio), iETS\(_D\) (direct probability) and iETS\(_G\) (general).
Depending on the type of the model, there are different mechanisms for model construction, error calculation, updating of the states and generation of forecasts.
In this vignette we will use the ETS(M,N,N) model as a base for the different parts of the models. Although this is a simplification, it allows a better understanding of the basics of the different types of iETS models, without loss of generality.
We will use artificial data in order to see how the functions work:
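One possible way to generate such a series is the following (a sketch; the exact call and seed used in the original vignette are assumptions, but Poisson counts with an increasing mean give the required mixture of zeroes and non-zero demand sizes):

```r
library(smooth)

# Artificial intermittent demand data: Poisson counts with increasing mean,
# so that the beginning of the series contains many zeroes
set.seed(41)
y <- ts(c(rpois(20, 0.25), rpois(20, 0.5), rpois(20, 1),
          rpois(20, 2), rpois(20, 3), rpois(20, 5)))
```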
All the models discussed in this vignette are implemented in the functions oes() and oesg(). The only missing element at the moment is the model selection mechanism for the demand occurrence part, so neither oes() nor oesg() currently supports "ZZZ" ETS models.
In the case of fixed \(a_t\) and \(b_t\), the iETS\(_G\) model reduces to:\[\begin{equation} \label{eq:ISSETS(MNN)Fixed}\tag{3} \begin{matrix} y_t = o_t z_t \\ o_t \sim \text{Bernoulli}(p) \end{matrix} .\end{equation}\]
The conditional h-steps-ahead mean of the demand occurrence probability is calculated as:\[\begin{equation} \label{eq:pt_fixed_expectation} \hat{o}_{t+h|t} = \hat{p} .\end{equation}\]
The estimate of the probability \(p\) is calculated by maximising the following concentrated log-likelihood function:\[\begin{equation}\label{eq:ISSETS(MNN)FixedLikelihoodSmooth} \ell \left({p} | o_t \right) = T_1 \log {p} + T_0 \log (1-{p}) ,\end{equation}\] where \(T_0\) is the number of zero observations and \(T_1\) is the number of non-zero observations in the data. The number of estimated parameters in this case is equal to \(k_z+1\), where \(k_z\) is the number of parameters for the demand sizes part, and 1 accounts for the estimation of the probability \(p\). Maximising this likelihood yields the analytical solution for \(p\):\[\begin{equation}\label{eq:ISSETS(MNN)FixedLikelihoodSmoothProbability} \hat{p} = \frac{T_1}{T},\end{equation}\] where \(T_1\) is the number of non-zero observations and \(T\) is the total number of available observations.
The occurrence part of the model oETS\(_F\) is constructed using the oes() function:
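A call along the following lines produces output like the one shown below (a sketch; the object name and the holdout settings are assumptions, with y being the artificial series generated earlier):

```r
# Fixed-probability occurrence model, withholding the last 10 observations
oETSFModel <- oes(y, model = "MNN", occurrence = "fixed", h = 10, holdout = TRUE)
oETSFModel
```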
## Occurrence state space model estimated: Fixed probability
## Underlying ETS model: oETS[F](MNN)
## Vector of initials:
##  level 
## 0.6818 
## 
## Error standard deviation: 1.0855
## Sample size: 110
## Number of estimated parameters: 1
## Number of degrees of freedom: 109
## Information criteria: 
##      AIC     AICc      BIC     BICc 
## 139.6081 139.6451 142.3086 142.3956

The occurrence part of the model is supported by the adam() function. For example, here's how the iETS(M,M,N)\(_F\) can be constructed:
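A sketch of the corresponding call (the arguments are assumptions consistent with the reported output, with y being the artificial series generated earlier):

```r
# iETS(M,M,N) with a fixed occurrence probability
adam(y, model = "MMN", occurrence = "fixed", h = 10, holdout = TRUE)
```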
## Time elapsed: 0.02 seconds
## Model estimated using adam() function: iETS(MMN)[F]
## With backcasting initialisation
## Occurrence model type: Fixed probability
## Distribution assumed in the model: Mixture of Bernoulli and Gamma
## Loss function type: likelihood; Loss function value: 135.9592
## Persistence vector g:
##  alpha   beta 
## 0.0069 0.0000 
## 
## Sample size: 110
## Number of estimated parameters: 4
## Number of degrees of freedom: 106
## Information criteria:
##      AIC     AICc      BIC     BICc 
## 417.5264 417.7528 428.3283 424.1600 
## 
## Forecast errors:
## Asymmetry: 69.278%; sMSE: 59.163%; rRMSE: 1.016; sPIS: -1391.751%; sCE: 421.452%

The odds-ratio iETS uses only one model for the occurrence part, for the \(\mu_{a,t}\) variable (setting \(\mu_{b,t}=1\)), which simplifies the iETS\(_G\) model. For example, for the iETS\(_O\)(M,N,N):\[\begin{equation} \label{eq:iETSO} \tag{5} \begin{matrix} y_t = o_t z_t \\ o_t \sim \text{Bernoulli} \left(p_t \right) \\ p_t = \frac{\mu_{a,t}}{\mu_{a,t}+1} \\ a_t = l_{a,t-1} \left(1 + \epsilon_{a,t} \right) \\ l_{a,t} = l_{a,t-1}( 1 + \alpha_{a} \epsilon_{a,t}) \\ \mu_{a,t} = l_{a,t-1} \end{matrix}.\end{equation}\]
In the estimation of the model, the initial level is set to the transformed mean probability of occurrence, \(l_{a,0}=\frac{\bar{p}}{1-\bar{p}}\) for the multiplicative error model and \(l_{a,0} = \log l_{a,0}\) for the additive one, where \(\bar{p}=\frac{1}{T} \sum_{t=1}^T o_t\); the initial trend is equal to 0 in the case of the additive and 1 in the case of the multiplicative types. In the case of seasonal models, a regression with dummy variables is fitted, and its parameters are then used for the initial seasonal indices after transformations similar to those for the level.
The construction of the model is done via the following set ofequations (example with oETS\(_O\)(M,N,N)):\[\begin{equation} \label{eq:iETSOEstimation} \begin{matrix} \hat{p}_t = \frac{\hat{a}_t}{\hat{a}_t+1} \\ \hat{a}_t = l_{a,t-1} \\ l_{a,t} = l_{a,t-1}( 1 + \alpha_{a} e_{a,t}) \\ 1+e_{a,t} = \frac{u_t}{1-u_t} \\ u_{t} = \frac{1 + o_t - \hat{p}_t}{2} \end{matrix},\end{equation}\] where\(\hat{a}_t\) is the estimate of\(\mu_{a,t}\).
Given that the model is estimated using the likelihood (2), it has \(k_z+k_a\) parameters to estimate, where \(k_z\) includes all the initial values, the smoothing parameters and the scale of the error of the demand sizes part of the model, and \(k_a\) includes only the initial values and the smoothing parameters of the model for the demand occurrence. In the case of iETS\(_O\)(M,N,N) this number is equal to 5.
The occurrence part of the model iETS\(_O\) is constructed using the very same oes() function, which also allows specifying the ETS model to use. For example, here's the ETS(M,M,N) model:
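A sketch of the call (the object name and holdout settings are assumptions, with y being the artificial series generated earlier):

```r
# Odds-ratio occurrence model with an underlying ETS(M,M,N)
oETSOModel <- oes(y, model = "MMN", occurrence = "odds-ratio", h = 10, holdout = TRUE)
oETSOModel
```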
## Occurrence state space model estimated: Odds ratio
## Underlying ETS model: oETS[O](MMN)
## Smoothing parameters:
##  level  trend 
## 0.0014 0.0014 
## Vector of initials:
##  level  trend 
## 0.6892 0.9887 
## 
## Error standard deviation: 1.22
## Sample size: 110
## Number of estimated parameters: 4
## Number of degrees of freedom: 106
## Information criteria: 
##      AIC     AICc      BIC     BICc 
## 119.5774 119.9583 130.3793 131.2746

And here's the full iETS(M,M,N)\(_O\) model:
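A sketch of the corresponding call (arguments are assumptions consistent with the reported output):

```r
# iETS(M,M,N) with an odds-ratio occurrence part
adam(y, model = "MMN", occurrence = "odds-ratio", h = 10, holdout = TRUE)
```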
## Time elapsed: 0.08 seconds
## Model estimated using adam() function: iETS(MMN)[O]
## With backcasting initialisation
## Occurrence model type: Odds ratio
## Distribution assumed in the model: Mixture of Bernoulli and Gamma
## Loss function type: likelihood; Loss function value: 135.9592
## Persistence vector g:
##  alpha   beta 
## 0.0069 0.0000 
## 
## Sample size: 110
## Number of estimated parameters: 7
## Number of degrees of freedom: 103
## Information criteria:
##      AIC     AICc      BIC     BICc 
## 397.4957 397.7221 416.3991 398.1293 
## 
## Forecast errors:
## Asymmetry: -54.404%; sMSE: 44.329%; rRMSE: 0.88; sPIS: 2085.154%; sCE: -232.578%

This should give the same results as before, because we explicitly ask the adam() function to use the earlier estimated model:
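A sketch of this two-step approach (the object name oETSOModel is hypothetical, standing for an odds-ratio model estimated earlier with oes()):

```r
# Pass the previously estimated occurrence model to adam()
adam(y, model = "MMN", occurrence = oETSOModel, h = 10, holdout = TRUE)
```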
This gives additional flexibility, because the construction can be done in two steps, with a more refined model for the occurrence part (e.g. including explanatory variables).
Similarly to the odds-ratio iETS, the inverse-odds-ratio model uses only one model for the occurrence part, but for the \(\mu_{b,t}\) variable instead of \(\mu_{a,t}\) (now \(\mu_{a,t}=1\)). Here is an example of iETS\(_I\)(M,N,N):\[\begin{equation} \label{eq:iETSI} \tag{6} \begin{matrix} y_t = o_t z_t \\ o_t \sim \text{Bernoulli} \left(p_t \right) \\ p_t = \frac{1}{1+\mu_{b,t}} \\ b_t = l_{b,t-1} \left(1 + \epsilon_{b,t} \right) \\ l_{b,t} = l_{b,t-1}( 1 + \alpha_{b} \epsilon_{b,t}) \\ \mu_{b,t} = l_{b,t-1} \end{matrix}.\end{equation}\]
This model resembles logistic regression, where the probability is obtained from an underlying regression model \(x_t'A\). In the estimation of the model, the initial level is set to the transformed mean probability of occurrence, \(l_{b,0}=\frac{1-\bar{p}}{\bar{p}}\) for the multiplicative error model and \(l_{b,0} = \log l_{b,0}\) for the additive one, where \(\bar{p}=\frac{1}{T} \sum_{t=1}^T o_t\); the initial trend is equal to 0 in the case of the additive and 1 in the case of the multiplicative types. The seasonality is treated similarly to the iETS\(_O\) model, but using the inverse-odds transformation.
The construction of the model is done via the set of equationssimilar to the ones for the iETS\(_O\)model:\[\begin{equation}\label{eq:iETSIEstimation} \begin{matrix} \hat{p}_t = \frac{1}{1+\hat{b}_t} \\ \hat{b}_t = l_{b,t-1} \\ l_{b,t} = l_{b,t-1}( 1 + \alpha_{b} e_{b,t}) \\ 1+e_{b,t} = \frac{1-u_t}{u_t} \\ u_{t} = \frac{1 + o_t - \hat{p}_t}{2} \end{matrix},\end{equation}\] where\(\hat{b}_t\) is the estimate of\(\mu_{b,t}\).
So the model iETS\(_I\) is like a mirror reflection of the model iETS\(_O\). However, it produces different forecasts, because it focuses on the probability of non-occurrence rather than the probability of occurrence. Interestingly enough, the probability of occurrence \(p_t\) can also be estimated if \(1+b_t\) in the denominator is set equal to the demand intervals (the time between demand occurrences). Model (6) underlies Croston's method in this case.
Once again, the oes() function is used in the construction of the model:
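A sketch of the call (the object name and holdout settings are assumptions, with y being the artificial series generated earlier):

```r
# Inverse-odds-ratio occurrence model with an underlying ETS(M,M,N)
oETSIModel <- oes(y, model = "MMN", occurrence = "inverse-odds-ratio",
                  h = 10, holdout = TRUE)
oETSIModel
```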
## Occurrence state space model estimated: Inverse odds ratio
## Underlying ETS model: oETS[I](MMN)
## Smoothing parameters:
## level trend 
##     0     0 
## Vector of initials:
##  level  trend 
## 2.8536 0.9639 
## 
## Error standard deviation: 2.7498
## Sample size: 110
## Number of estimated parameters: 4
## Number of degrees of freedom: 106
## Information criteria: 
##      AIC     AICc      BIC     BICc 
## 120.8518 121.2327 131.6537 132.5490

And here's the full iETS(M,M,N)\(_I\) model:
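A sketch of the corresponding call (arguments are assumptions consistent with the reported output):

```r
# iETS(M,M,N) with an inverse-odds-ratio occurrence part
adam(y, model = "MMN", occurrence = "inverse-odds-ratio", h = 10, holdout = TRUE)
```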
## Time elapsed: 0.08 seconds
## Model estimated using adam() function: iETS(MMN)
## With backcasting initialisation
## Occurrence model type: Inverse odds ratio
## Distribution assumed in the model: Mixture of Bernoulli and Gamma
## Loss function type: likelihood; Loss function value: 135.9592
## Persistence vector g:
##  alpha   beta 
## 0.0069 0.0000 
## 
## Sample size: 110
## Number of estimated parameters: 7
## Number of degrees of freedom: 103
## Information criteria:
##      AIC     AICc      BIC     BICc 
## 398.7701 398.9965 417.6734 399.4037 
## 
## Forecast errors:
## Asymmetry: -48.031%; sMSE: 42.546%; rRMSE: 0.862; sPIS: 1850.674%; sCE: -189.787%

Once again, an earlier estimated model can be used in the univariate forecasting functions:
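A sketch of this two-step approach (the object name oETSIModel is hypothetical, standing for an inverse-odds-ratio model estimated earlier with oes()):

```r
# Reuse the previously estimated inverse-odds-ratio occurrence model
adam(y, model = "MMN", occurrence = oETSIModel, h = 10, holdout = TRUE)
```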
This model appears when a specific restriction is imposed:\[\begin{equation} \label{eq:iETSGRestriction} \mu_{a,t} + \mu_{b,t} = 1, \mu_{a,t} \in [0, 1] .\end{equation}\] The pure multiplicative iETS\(_G\)(M,N,N) model is then transformed into iETS\(_D\)(M,N,N):\[\begin{equation} \label{eq:iETSD} \tag{7} \begin{matrix} y_t = o_t z_t \\ o_t \sim \text{Bernoulli} \left(\mu_{a,t} \right) \\ a_t = l_{a,t-1} \left(1 + \epsilon_{a,t} \right) \\ l_{a,t} = l_{a,t-1}( 1 + \alpha_{a} \epsilon_{a,t}) \\ \mu_{a,t} = \min(l_{a,t-1}, 1) \end{matrix}.\end{equation}\] The option with the additive model in this case has a different, more complicated form:\[\begin{equation} \label{eq:iETSDAdditive} \begin{matrix} y_t = o_t z_t \\ o_t \sim \text{Bernoulli} \left(\mu_{a,t} \right) \\ a_t = l_{a,t-1} + \epsilon_{a,t} \\ l_{a,t} = l_{a,t-1} + \alpha_{a} \epsilon_{a,t} \\ \mu_{a,t} = \max \left( \min(l_{a,t-1}, 1), 0 \right) \end{matrix}.\end{equation}\]
The estimation of the multiplicative error model is done using the following set of equations:\[\begin{equation}\label{eq:ISSETS(MNN)_probability_estimate} \begin{matrix} \hat{y}_t = o_t \hat{l}_{z,t-1} \\ \hat{l}_{z,t} = \hat{l}_{z,t-1}( 1 + \alpha e_t) \\ \hat{a}_t = \min(\hat{l}_{a,t-1}, 1) \\ \hat{l}_{a,t} = \hat{l}_{a,t-1}( 1 + \alpha_{a} e_{a,t}) \end{matrix},\end{equation}\] where\[\begin{equation}\label{eq:ISSETS(MNN)_TSB_model_error_approximation} e_{a,t} = \frac{o_t (1 - 2 \kappa) + \kappa - \hat{a}_t}{\hat{a}_t},\end{equation}\] and \(\kappa\) is a very small number (for example, \(\kappa = 10^{-10}\)), needed only in order to make the model estimable. The estimate of the error term in the case of the additive model is much simpler and does not need any specific tricks to work:\[\begin{equation}\label{eq:ISSETS(MNN)_TSB_model_error_approximation2} e_{a,t} = o_t - \hat{a}_t .\end{equation}\]
The initial values of the iETS\(_D\) model are calculated directly from the data without any additional transformations.
Here’s an example of the application of the model to the sameartificial data:
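A sketch of the call (the object name and holdout settings are assumptions, with y being the artificial series generated earlier):

```r
# Direct-probability occurrence model with an underlying ETS(M,M,N)
oETSDModel <- oes(y, model = "MMN", occurrence = "direct", h = 10, holdout = TRUE)
oETSDModel
```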
## Occurrence state space model estimated: Direct probability
## Underlying ETS model: oETS[D](MMN)
## Smoothing parameters:
## level trend 
## 1e-04 1e-04 
## Vector of initials:
##  level  trend 
## 0.3391 1.0119 
## 
## Error standard deviation: 0.8171
## Sample size: 110
## Number of estimated parameters: 4
## Number of degrees of freedom: 106
## Information criteria: 
##      AIC     AICc      BIC     BICc 
## 116.0814 116.4623 126.8833 127.7786

The usage of the model in univariate forecasting functions is the same as for the other occurrence models discussed above:
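A sketch of the corresponding call (arguments are assumptions consistent with the reported output):

```r
# iETS(M,M,N) with a direct-probability occurrence part
adam(y, model = "MMN", occurrence = "direct", h = 10, holdout = TRUE)
```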
## Time elapsed: 0.02 seconds
## Model estimated using adam() function: iETS(MMN)[D]
## With backcasting initialisation
## Occurrence model type: Direct
## Distribution assumed in the model: Mixture of Bernoulli and Gamma
## Loss function type: likelihood; Loss function value: 135.9592
## Persistence vector g:
##  alpha   beta 
## 0.0069 0.0000 
## 
## Sample size: 110
## Number of estimated parameters: 3
## Number of degrees of freedom: 107
## Information criteria:
##      AIC     AICc      BIC     BICc 
## 385.9997 386.2261 394.1011 394.6333 
## 
## Forecast errors:
## Asymmetry: -60.162%; sMSE: 46.769%; rRMSE: 0.903; sPIS: 2333.765%; sCE: -275.487%

This model has already been discussed above and was presented in (1). The estimation of the iETS(M,N,N)\(_G\) model is done via the following set of equations:\[\begin{equation} \label{eq:ISSETS(MNN)Estimated} \begin{matrix} \hat{y}_t = o_t \hat{z}_t \\ e_t = o_t \frac{y_t - \hat{z}_t}{\hat{z}_t} \\ \hat{z}_t = \hat{l}_{z,t-1} \\ \hat{l}_{z,t} = \hat{l}_{z,t-1}( 1 + \alpha_z e_t) \\ e_{a,t} = \frac{u_t}{1-u_t} -1 \\ \hat{a}_t = \hat{l}_{a,t-1} \\ \hat{l}_{a,t} = \hat{l}_{a,t-1}( 1 + \alpha_{a} e_{a,t}) \\ e_{b,t} = \frac{1-u_t}{u_t} -1 \\ \hat{b}_t = \hat{l}_{b,t-1} \\ \hat{l}_{b,t} = \hat{l}_{b,t-1}( 1 + \alpha_{b} e_{b,t}) \end{matrix} .\end{equation}\] The initialisation of the parameters of the iETS\(_G\) model is done separately for the variables \(a_t\) and \(b_t\), based on the principles described above for the iETS\(_O\) and iETS\(_I\).
There is a separate function for this model, called oesg(). It has twice as many parameters as oes(), because it allows fine-tuning the models for the variables \(a_t\) and \(b_t\). This gives additional flexibility. For example, here is how we can use ETS(M,N,N) for \(a_t\) and ETS(A,A,N) for \(b_t\), resulting in oETS\(_G\)(M,N,N)(A,A,N):
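A sketch of the call (the object name, and the assumption that oesg() shares the h/holdout interface of oes(), are illustrative):

```r
# General occurrence model with different ETS types for a_t and b_t
oETSGModel <- oesg(y, modelA = "MNN", modelB = "AAN", h = 10, holdout = TRUE)
oETSGModel
```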
## Occurrence state space model estimated: General
## Underlying ETS model: oETS[G](MNN)(AAN)
## 
## Sample size: 110
## Number of estimated parameters: 6
## Number of degrees of freedom: 104
## Information criteria: 
##      AIC     AICc      BIC     BICc 
## 125.3732 126.1887 141.5761 143.4928

The oes() function accepts occurrence="g" and in this case calls oesg() with the same type of ETS model for both parts:
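A sketch of this shortcut (arguments are assumptions consistent with the reported output):

```r
# Same as oesg(y, modelA = "MNN", modelB = "MNN", ...)
oes(y, model = "MNN", occurrence = "general", h = 10, holdout = TRUE)
```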
## Occurrence state space model estimated: General
## Underlying ETS model: oETS[G](MNN)(MNN)
## 
## Sample size: 110
## Number of estimated parameters: 4
## Number of degrees of freedom: 106
## Information criteria: 
##      AIC     AICc      BIC     BICc 
## 126.9182 127.2992 137.7201 138.6155

Finally, the more flexible way to construct an iETS model is to do it in two steps: first using oesg() or oes(), and then calling adam() with the estimated model provided in the occurrence variable. But a simpler option is available as well:
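A sketch of the simpler, one-step option (arguments are assumptions consistent with the reported output):

```r
# Let adam() estimate the general occurrence model internally
adam(y, model = "MMN", occurrence = "general", h = 10, holdout = TRUE)
```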
## Time elapsed: 0.11 seconds
## Model estimated using adam() function: iETS(MMN)[G]
## With backcasting initialisation
## Occurrence model type: General
## Distribution assumed in the model: Mixture of Bernoulli and Gamma
## Loss function type: likelihood; Loss function value: 135.9592
## Persistence vector g:
##  alpha   beta 
## 0.0069 0.0000 
## 
## Sample size: 110
## Number of estimated parameters: 11
## Number of degrees of freedom: 99
## Information criteria:
##      AIC     AICc      BIC     BICc 
## 443.3665 443.5929 473.0718 436.0001 
## 
## Forecast errors:
## Asymmetry: 100%; sMSE: 370.591%; rRMSE: 2.543; sPIS: -9067.561%; sCE: 1809.863%

Finally, there is an occurrence-type selection mechanism. It tries out all the iETS subtypes discussed above and selects the one with the lowest information criterion (e.g. AIC). This subtype is called iETS\(_A\) (automatic), although it does not represent any specific model. Here's an example:
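A sketch of the call (arguments are assumptions consistent with the reported output):

```r
# Automatically select the occurrence subtype by information criterion
oes(y, model = "MNN", occurrence = "auto", h = 10, holdout = TRUE)
```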
## Occurrence state space model estimated: Direct probability
## Underlying ETS model: oETS[D](MNN)
## Smoothing parameters:
##  level 
## 0.1137 
## Vector of initials:
##  level 
## 0.4363 
## 
## Error standard deviation: 0.9398
## Sample size: 110
## Number of estimated parameters: 2
## Number of degrees of freedom: 108
## Information criteria: 
##      AIC     AICc      BIC     BICc 
## 122.4506 122.5628 127.8516 128.1152

The main restriction of the iETS models at the moment (smooth v.2.5.0) is that there is no model selection between the ETS models for the occurrence part; this needs to be done manually. Hopefully, this feature will appear in a future release of the package.