| Type: | Package |
| Title: | Forecastable Component Analysis |
| Version: | 0.2.7 |
| Date: | 2020-06-21 |
| URL: | https://github.com/gmgeorg/ForeCA |
| Description: | Implementation of Forecastable Component Analysis ('ForeCA'), including main algorithms and auxiliary functions (summary, plotting, etc.) to apply 'ForeCA' to multivariate time series data. 'ForeCA' is a novel dimension reduction (DR) technique for temporally dependent signals. Contrary to other popular DR methods, such as 'PCA' or 'ICA', 'ForeCA' takes time dependency explicitly into account and searches for the most “forecastable” signal. The measure of forecastability is based on the Shannon entropy of the spectral density of the transformed signal. |
| Depends: | R (≥ 3.5.0) |
| License: | GPL-2 |
| Imports: | astsa (≥ 1.10), MASS, graphics, reshape2 (≥ 1.4.4), utils |
| Suggests: | psd, fBasics, knitr, markdown, mgcv, nlme (≥ 3.1-64), testthat (≥ 2.0.0), rSFA |
| RoxygenNote: | 7.1.1 |
| Encoding: | UTF-8 |
| VignetteBuilder: | knitr |
| NeedsCompilation: | no |
| Packaged: | 2020-06-29 01:43:22 UTC; georg |
| Author: | Georg M. Goerg [aut, cre] |
| Maintainer: | Georg M. Goerg <im@gmge.org> |
| Repository: | CRAN |
| Date/Publication: | 2020-06-29 12:40:42 UTC |
Implementation of Forecastable Component Analysis (ForeCA)
Description
Forecastable Component Analysis (ForeCA) is a novel dimension reduction technique for multivariate time series \mathbf{X}_t. ForeCA finds a linear combination y_t = \mathbf{X}_t \mathbf{v} that is easy to forecast. The measure of forecastability \Omega(y_t) (Omega) is based on the entropy of the spectral density f_y(\lambda) of y_t: higher entropy means less forecastable, lower entropy means more forecastable.
The main function foreca runs ForeCA on a multivariate time series \mathbf{X}_t.
Consult NEWS.md for a history of release notes.
Author(s)
Author and maintainer: Georg M. Goerg <im@gmge.org>
References
Goerg, G. M. (2013). “Forecastable Component Analysis”. Journal of Machine Learning Research (JMLR) W&CP 28 (2): 64-72, 2013. Available at http://jmlr.org/proceedings/papers/v28/goerg13.html.
Examples
XX <- ts(diff(log(EuStockMarkets)))
Omega(XX)

plot(log10(lynx))
Omega(log10(lynx))

## Not run:
ff <- foreca(XX, n.comp = 4)
ff
plot(ff)
summary(ff)
## End(Not run)
Estimate forecastability of a time series
Description
An estimator for the forecastability \Omega(x_t) of a univariate time series x_t. Currently it uses a discrete plug-in estimator given the empirical spectrum (periodogram).
Usage
Omega(
  series = NULL,
  spectrum.control = list(),
  entropy.control = list(),
  mvspectrum.output = NULL
)
Arguments
series | a univariate time series; if it is multivariate, then |
spectrum.control | list; control settings for spectrum estimation. See |
entropy.control | list; control settings for entropy estimation. See |
mvspectrum.output | an object of class |
Details
The forecastability of a stationary process x_t is defined as (see References)
\Omega(x_t) = 1 - \frac{ - \int_{-\pi}^{\pi} f_x(\lambda) \log f_x(\lambda) d \lambda }{\log 2 \pi} \in [0, 1]
where f_x(\lambda) is the normalized spectral density of x_t. In particular, \int_{-\pi}^{\pi} f_x(\lambda) d\lambda = 1.
For white noise \varepsilon_t the forecastability \Omega(\varepsilon_t) = 0; for a sum of sinusoids it equals 100\%. However, empirically it reaches 100\% only if the estimated spectrum has exactly one peak at some \omega_j and \widehat{f}(\omega_k) = 0 for all k \neq j.
In practice, a time series of length T has T Fourier frequencies which represent a discrete probability distribution. Hence the entropy of f_x(\lambda) must be normalized by \log T, not by \log 2 \pi.
We can also use several smoothing techniques to obtain a lower-variance estimate of f_x(\lambda).
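As a minimal sketch of this discrete plug-in definition (omega_plugin below is a hypothetical helper for illustration only; the package's Omega() additionally supports smoothing, priors, and thresholding via its control arguments):

# Hypothetical sketch of the plug-in Omega estimator described above;
# Omega() is the function to use in practice.
omega_plugin <- function(x) {
  pgram <- spec.pgram(x, plot = FALSE)$spec   # raw periodogram at the Fourier frequencies
  p <- pgram / sum(pgram)                     # normalize to a discrete probability distribution
  p <- p[p > 0]                               # convention: 0 * log(0) = 0
  entropy <- -sum(p * log(p))                 # Shannon entropy (nats)
  100 * (1 - entropy / log(length(pgram)))    # rescale by log(#frequencies) to [0, 100]
}

omega_plugin(rnorm(200))                    # in theory 0 for white noise (raw periodogram is biased upwards)
omega_plugin(sin(seq_len(200) * pi / 10))   # close to 100 for a pure sinusoid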
Value
A real value between 0 and 100 (%). 0 means not forecastable (white noise); 100 means perfectly forecastable (a sinusoid).
References
Goerg, G. M. (2013). “Forecastable Component Analysis”. Journal of Machine Learning Research (JMLR) W&CP 28 (2): 64-72, 2013. Available at http://jmlr.org/proceedings/papers/v28/goerg13.html.
See Also
spectral_entropy, discrete_entropy, continuous_entropy
Examples
nn <- 100
eps <- rnorm(nn)  # white noise has Omega() = 0 in theory
Omega(eps, spectrum.control = list(method = "pgram"))
# smoothing makes it closer to 0
Omega(eps, spectrum.control = list(method = "mvspec"))

xx <- sin(seq_len(nn) * pi / 10)
Omega(xx, spectrum.control = list(method = "pgram"))
Omega(xx, entropy.control = list(threshold = 1/40))
Omega(xx, spectrum.control = list(method = "mvspec"),
      entropy.control = list(threshold = 1/20))

# an AR(1) with phi = 0.5
yy <- arima.sim(n = nn, model = list(ar = 0.5))
Omega(yy, spectrum.control = list(method = "mvspec"))

# an AR(1) with phi = 0.9 is more forecastable
yy <- arima.sim(n = nn, model = list(ar = 0.9))
Omega(yy, spectrum.control = list(method = "mvspec"))
List of common arguments
Description
Common arguments used in several functions in this package.
Arguments
series | a |
U | a |
mvspectrum.output | an object of class |
f.U | multivariate spectrum of class |
algorithm.control | list; control settings for any iterative ForeCA algorithm. See |
entropy.control | list; control settings for entropy estimation. See |
spectrum.control | list; control settings for spectrum estimation. See |
entropy.method | string; method to estimate the entropy from discrete probabilities |
spectrum.method | string; method for spectrum estimation; see |
threshold | numeric; values of spectral density below |
smoothing | logical; if |
base | logarithm base; entropy is measured in “nats” for |
Completes several control settings
Description
Completes algorithm, entropy, and spectrum control lists.
Usage
complete_algorithm_control(
  algorithm.control = list(max.iter = 50, num.starts = 10, tol = 0.001, type = "EM")
)

complete_entropy_control(
  entropy.control = list(base = NULL, method = "MLE", prior.probs = NULL,
                         prior.weight = 0.001, threshold = 0),
  num.outcomes
)

complete_spectrum_control(
  spectrum.control = list(kernel = NULL, method = c("mvspec", "pspectrum", "ar", "pgram"),
                          smoothing = FALSE)
)
Arguments
algorithm.control | list; control parameters for any iterative ForeCA algorithm. |
entropy.control | list; control settings for entropy estimation. |
num.outcomes | positive integer; number of outcomes for the discrete probability distribution. Must be specified (no default value). |
spectrum.control | list; control settings for spectrum estimation. |
Value
A list with fully specified algorithm, entropy, or spectrum controls. Default values are only added if the input {spectrum,entropy,algorithm}.control list does not already set this value.
complete_algorithm_control returns a list containing:
max.iter | maximum number of iterations; default: |
num.starts | number of random starts to avoid local optima; default: |
tol | tolerance for when convergence is reached in any iterative ForeCA algorithm; default: |
type | string; type of algorithm. Default: |
complete_entropy_control returns a list with:
base | logarithm base for the entropy. |
method | string; method to estimate entropy; default: |
prior.probs | prior distribution; default: uniform |
prior.weight | weight of the prior distribution; default: |
threshold | non-negative float; set probabilities below threshold to zero; default: |
complete_spectrum_control returns a list containing:
kernel | R function; function to weigh each Fourier frequency |
method | string; method to estimate the spectrum; default: |
smoothing | logical; default: |
Available methods for spectrum estimation are (in alphabetical order):
"ar" | autoregressive spectrum fit via |
"mvspec" | smoothed estimate using |
"pgram" | raw periodogram using |
"pspectrum" | advanced non-parametric estimation of a tapered power spectrum using |
Setting smoothing = TRUE will smooth the estimated spectrum (again); this option is only available for univariate time series/spectra.
See Also
mvspectrum, discrete_entropy, continuous_entropy
Shannon entropy for a continuous pdf
Description
Computes the Shannon entropy \mathcal{H}(p) for a continuous probability density function (pdf) p(x) using numerical integration.
Usage
continuous_entropy(pdf, lower, upper, base = 2)
Arguments
pdf | R function for the pdf |
lower,upper | lower and upper integration limit. |
base | logarithm base; entropy is measured in “nats” for |
Details
The Shannon entropy of a continuous random variable (RV) X \sim p(x) is defined as
\mathcal{H}(p) = -\int_{-\infty}^{\infty} p(x) \log p(x) d x.
Contrary to discrete RVs, continuous RVs can have negative entropy (see Examples).
Value
scalar; entropy value (real).
Since continuous_entropy uses numerical integration (integrate()), convergence is not guaranteed (even if the integral in the definition of \mathcal{H}(p) exists). It issues a warning if integrate() does not converge.
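A minimal sketch of this numerical-integration approach (entropy_integrate is a hypothetical helper shown only to illustrate the definition; continuous_entropy() is the function to use):

# Hypothetical sketch: differential entropy by numerical integration (base 2 by default).
entropy_integrate <- function(pdf, lower, upper, base = 2) {
  integrand <- function(x) {
    px <- pdf(x)
    ifelse(px > 0, -px * log(px, base = base), 0)  # convention: 0 * log(0) = 0
  }
  integrate(integrand, lower, upper)$value
}

entropy_integrate(function(x) dunif(x, 0, 0.5), 0, 0.5)  # log2(0.5) = -1, i.e., negative entropy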
See Also
Examples
# entropy of U(a, b) = log(b - a). Thus not necessarily positive anymore, e.g.
continuous_entropy(function(x) dunif(x, 0, 0.5), 0, 0.5)  # log2(0.5)

# Same, but for U(-1, 1)
my_density <- function(x) {
  dunif(x, -1, 1)
}
continuous_entropy(my_density, -1, 1)  # = log(upper - lower)

# a 'triangle' distribution
continuous_entropy(function(x) x, 0, sqrt(2))
Shannon entropy for discrete pmf
Description
Computes the Shannon entropy \mathcal{H}(p) = -\sum_{i=1}^{n} p_i \log p_i of a discrete RV X taking values in \lbrace x_1, \ldots, x_n \rbrace with probability mass function (pmf) P(X = x_i) = p_i, where p_i \geq 0 for all i and \sum_{i=1}^{n} p_i = 1.
Usage
discrete_entropy(
  probs,
  base = 2,
  method = c("MLE"),
  threshold = 0,
  prior.probs = NULL,
  prior.weight = 0
)
Arguments
probs | numeric; probabilities (empirical frequencies). Must be non-negative and add up to |
base | logarithm base; entropy is measured in “nats” for |
method | string; method to estimate entropy; see Details below. |
threshold | numeric; frequencies below |
prior.probs | optional; only used if |
prior.weight | numeric; how much weight does the prior distribution get in a mixture model between data and prior distribution? Must be between |
Details
discrete_entropy uses a plug-in estimator (method = "MLE"):
\widehat{\mathcal{H}}(p) = - \sum_{i=1}^{n} \widehat{p}_i \log \widehat{p}_i.
If prior.weight > 0, then it mixes the observed proportions \widehat{p}_i with a prior distribution
\widehat{p}_i \leftarrow (1-\lambda) \cdot \widehat{p_i} + \lambda \cdot prior_i, \quad i=1, \ldots, n,
where \lambda \in [0, 1] is the prior.weight parameter. By default the prior is a uniform distribution, i.e., prior_i = \frac{1}{n} for all i.
Note that this plug-in estimator is biased. See References for an overview of alternative methods.
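A minimal sketch of the plug-in estimator with the prior-mixing step above (entropy_mle is a hypothetical helper; discrete_entropy() additionally handles thresholding and input checks):

# Hypothetical sketch of the "MLE" plug-in estimator with optional prior mixing.
entropy_mle <- function(probs, base = 2, prior.weight = 0,
                        prior.probs = rep(1 / length(probs), length(probs))) {
  p <- (1 - prior.weight) * probs + prior.weight * prior.probs  # mix data with prior
  p <- p[p > 0]                                                 # convention: 0 * log(0) = 0
  -sum(p * log(p, base = base))
}

entropy_mle(c(1, 0, 0))                      # no uncertainty: 0
entropy_mle(rep(1/5, 5))                     # uniform pmf: log2(5)
entropy_mle(c(1, 0, 0), prior.weight = 0.1)  # shrinking towards uniform gives positive entropy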
Value
numeric; non-negative real value.
References
Archer E., Park I. M., Pillow J. W. (2014). “Bayesian Entropy Estimation for Countable Discrete Distributions”. Journal of Machine Learning Research (JMLR) 15, 2833-2868. Available at http://jmlr.org/papers/v15/archer14a.html.
See Also
Examples
probs.tmp <- rexp(5)
probs.tmp <- sort(probs.tmp / sum(probs.tmp))

unif.distr <- rep(1/length(probs.tmp), length(probs.tmp))

matplot(cbind(probs.tmp, unif.distr), pch = 19,
        ylab = "P(X = k)", xlab = "k")
matlines(cbind(probs.tmp, unif.distr))
legend("topleft", c("non-uniform", "uniform"), pch = 19,
       lty = 1:2, col = 1:2, box.lty = 0)

discrete_entropy(probs.tmp)
# uniform has largest entropy among all bounded discrete pmfs
# (here = log(5))
discrete_entropy(unif.distr)
# no uncertainty if one element occurs with probability 1
discrete_entropy(c(1, 0, 0))
Forecastable Component Analysis
Description
foreca performs Forecastable Component Analysis (ForeCA) on \mathbf{X}_t – a K-dimensional time series with T observations. Users should only call foreca, rather than foreca.one_weightvector or foreca.multiple_weightvectors.
foreca.one_weightvector is a wrapper around several algorithms that solve the ForeCA optimization problem for a single weightvector \mathbf{w}_i and whitened time series \mathbf{U}_t.
foreca.multiple_weightvectors applies foreca.one_weightvector iteratively to \mathbf{U}_t in order to obtain multiple weightvectors that yield most forecastable, uncorrelated signals.
Usage
foreca(series, n.comp = 2, algorithm.control = list(type = "EM"), ...)

foreca.one_weightvector(
  U,
  f.U = NULL,
  spectrum.control = list(),
  entropy.control = list(),
  algorithm.control = list(),
  keep.all.optima = FALSE,
  dewhitening = NULL,
  ...
)

foreca.multiple_weightvectors(
  U,
  spectrum.control = list(),
  entropy.control = list(),
  algorithm.control = list(),
  n.comp = 2,
  plot = FALSE,
  dewhitening = NULL,
  ...
)
Arguments
series | a |
n.comp | positive integer; number of components to be extracted. Default: |
algorithm.control | list; control settings for any iterative ForeCA algorithm. See |
... | additional arguments passed to available ForeCA algorithms. |
U | a |
f.U | multivariate spectrum of class |
spectrum.control | list; control settings for spectrum estimation. See |
entropy.control | list; control settings for entropy estimation. See |
keep.all.optima | logical; if |
dewhitening | optional; if provided (returned by |
plot | logical; if |
Value
An object of class foreca, which is similar to the output from princomp, with the following components (amongst others):
center: sample mean \widehat{\mu}_X of each series,

whitening: whitening matrix of size K \times K from whiten: \mathbf{U}_t = (\mathbf{X}_t - \widehat{\mu}_X) \cdot whitening; note that \mathbf{X}_t is centered prior to the whitening transformation,

weightvectors: orthonormal matrix of size K \times n.comp, which converts whitened data to n.comp forecastable components (ForeCs) \mathbf{F}_t = \mathbf{U}_t \cdot weightvectors,

loadings: combination of whitening \times weightvectors to obtain the final loadings for the original data: \mathbf{F}_t = (\mathbf{X}_t - \widehat{\mu}_X) \cdot whitening \cdot weightvectors; again, it centers \mathbf{X}_t first,

loadings.normalized: normalized loadings (unit norm). Note though that if you use these normalized loadings the resulting signals do not have variance 1 anymore.

scores: n.comp forecastable components \mathbf{F}_t. They have mean 0, variance 1, and are uncorrelated.

Omega: forecastability score of each ForeC of \mathbf{F}_t.
ForeCs are ordered from most to least forecastable (according to Omega).
Warning
Estimating Omega directly from the ForeCs \mathbf{F}_t can differ from the $Omega estimates reported by foreca. Here is why:
In theory f_y(\lambda) of a linear combination y_t = \mathbf{X}_t \mathbf{w} can be analytically computed from the multivariate spectrum f_{\mathbf{X}}(\lambda) by the quadratic form f_y(\lambda) = \mathbf{w}' f_{\mathbf{X}}(\lambda) \mathbf{w} for all \lambda (see spectrum_of_linear_combination).
In practice, however, this identity does not always hold exactly, since the (often data-driven) control settings for spectrum estimation are not identical for the high-dimensional, noisy \mathbf{X}_t and the combined univariate time series y_t (which is usually smoother and less variable). Thus, estimating \widehat{f}_y directly from y_t can give slightly different estimates than computing it as \mathbf{w}' \widehat{f}_{\mathbf{X}} \mathbf{w}. Consequently, the Omega estimates can also differ.
In general, these differences are small and have no relevant implications for estimating ForeCs. However, in rare occasions the obtained ForeCs can have smaller Omega than the maximum Omega across all original series. In such a case users should not re-estimate \Omega from the resulting ForeCs \mathbf{F}_t, but access them via $Omega provided by 'foreca' output (the univariate estimates are stored in $Omega.univ).
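The sketch below contrasts the two routes described in this warning using only functions documented in this package; the particular weightvector is an arbitrary illustration and the two numbers are expected to be close but not identical:

XX <- diff(log(EuStockMarkets)) * 100
PW <- whiten(XX)

ww <- rnorm(ncol(XX))
ww <- ww / sqrt(sum(ww^2))  # some unit-norm weightvector (arbitrary choice)

# (a) quadratic form w' f_U(lambda) w on the multivariate spectrum of the whitened data
f.U <- mvspectrum(PW$U, method = "mvspec", normalize = TRUE)
f.y <- foreca.EM.E_step(f.U, ww)  # wrapper around spectrum_of_linear_combination
Omega(mvspectrum.output = f.y)

# (b) spectrum estimated directly from the combined univariate series
yy <- PW$U %*% ww
Omega(yy, spectrum.control = list(method = "mvspec"))
# (a) and (b) are usually close, but need not match exactly (see above)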
References
Goerg, G. M. (2013). “Forecastable Component Analysis”. Journal of Machine Learning Research (JMLR) W&CP 28 (2): 64-72, 2013. Available at http://jmlr.org/proceedings/papers/v28/goerg13.html.
Examples
XX <- diff(log(EuStockMarkets)) * 100
plot(ts(XX))

## Not run:
ff <- foreca(XX[, 1:4], n.comp = 4, plot = TRUE,
             spectrum.control = list(method = "pspectrum"))
ff
summary(ff)
plot(ff)
## End(Not run)

## Not run:
PW <- whiten(XX)
one.weight.em <- foreca.one_weightvector(U = PW$U,
                                         dewhitening = PW$dewhitening,
                                         algorithm.control = list(num.starts = 2, type = "EM"),
                                         spectrum.control = list(method = "mvspec"))
plot(one.weight.em)
## End(Not run)

## Not run:
PW <- whiten(XX)
ff <- foreca.multiple_weightvectors(PW$U, n.comp = 2,
                                    dewhitening = PW$dewhitening)
ff
plot(ff$scores)
## End(Not run)
Plot, summary, and print methods for class 'foreca'
Description
A collection of S3 methods for estimated ForeCA results (class "foreca").
summary.foreca computes summary statistics.
print.foreca prints a human-readable summary in the console.
biplot.foreca shows a biplot of the ForeCA loadings (wrapper around biplot.princomp).
plot.foreca shows biplots, screeplots, and white noise tests.
Usage
## S3 method for class 'foreca'
summary(object, lag = 10, alpha = 0.05, ...)

## S3 method for class 'foreca'
print(x, ...)

## S3 method for class 'foreca'
biplot(x, ...)

## S3 method for class 'foreca'
plot(x, lag = 10, alpha = 0.05, ...)
Arguments
lag | integer; how many lags to test in |
alpha | significance level for testing white noise in |
... | additional arguments passed to |
x,object | an object of class |
Examples
# see examples in 'foreca'
ForeCA EM auxiliary functions
Description
foreca.EM.one_weightvector relies on several auxiliary functions:
foreca.EM.E_step computes the spectral density of y_t = \mathbf{U}_t \mathbf{w} given the weightvector \mathbf{w} and the normalized spectrum estimate f_{\mathbf{U}}. A wrapper around spectrum_of_linear_combination.
foreca.EM.M_step computes the minimizing eigenvector (\rightarrow \widehat{\mathbf{w}}_{i+1}) of the weighted covariance matrix, where the weights equal the negative logarithm of the spectral density at the current \widehat{\mathbf{w}}_i.
foreca.EM.E_and_M_step is a wrapper around foreca.EM.E_step followed by foreca.EM.M_step.
foreca.EM.h evaluates (an upper bound of) the entropy of the spectral density as a function of \mathbf{w}_i (or \mathbf{w}_{i+1}). This is the objective function that should be minimized.
Usage
foreca.EM.E_step(f.U, weightvector)

foreca.EM.M_step(f.U, f.current, minimize = TRUE, entropy.control = list())

foreca.EM.E_and_M_step(
  weightvector,
  f.U,
  minimize = TRUE,
  entropy.control = list()
)

foreca.EM.h(
  weightvector.new,
  f.U,
  weightvector.current = weightvector.new,
  f.current = NULL,
  entropy.control = list(),
  return.negative = FALSE
)
Arguments
f.U | multivariate spectrum of class |
weightvector | numeric; weights |
f.current | numeric; spectral density estimate of |
minimize | logical; if |
entropy.control | list; control settings for entropy estimation. See |
weightvector.new | weightvector |
weightvector.current | weightvector |
return.negative | logical; if |
Value
foreca.EM.E_step returns the normalized univariate spectral density (normalized such that its sum equals 0.5).
foreca.EM.M_step returns a list with three elements:
matrix: weighted covariance matrix, where the weights are the negative log of the spectral density. If the density is estimated by discrete probabilities, then this matrix is positive semi-definite, since -\log(p) \geq 0 for p \in [0, 1]. See weightvector2entropy_wcov.

vector: minimizing (or maximizing if minimize = FALSE) eigenvector of matrix,

value: corresponding eigenvalue.
Contrary to foreca.EM.M_step, foreca.EM.E_and_M_step only returns the optimal weightvector as a numeric.
foreca.EM.h returns a non-negative real value (see References for details):

- the entropy, if weightvector.new = weightvector.current,
- an upper bound of that entropy for weightvector.new, otherwise.
See Also
Examples
## Not run:
XX <- diff(log(EuStockMarkets)) * 100
UU <- whiten(XX)$U
ff <- mvspectrum(UU, 'mvspec', normalize = TRUE)

ww0 <- initialize_weightvector(num.series = ncol(XX), method = 'rnorm')

f.ww0 <- foreca.EM.E_step(ff, ww0)
plot(f.ww0, type = "l")
## End(Not run)

## Not run:
one.step <- foreca.EM.M_step(ff, f.ww0,
                             entropy.control = list(prior.weight = 0.1))
image(one.step$matrix)

requireNamespace("LICORS")
# if you have the 'LICORS' package use
LICORS::image2(one.step$matrix)

ww1 <- one.step$vector
f.ww1 <- foreca.EM.E_step(ff, ww1)

layout(matrix(1:2, ncol = 2))
matplot(seq(0, pi, length = length(f.ww0)), cbind(f.ww0, f.ww1),
        type = "l", lwd = 2, xlab = "omega_j", ylab = "f(omega_j)")
plot(f.ww0, f.ww1, pch = ".", cex = 3,
     xlab = "iteration 0", ylab = "iteration 1", main = "Spectral density")
abline(0, 1, col = 'blue', lty = 2, lwd = 2)

Omega(mvspectrum.output = f.ww0)  # start
Omega(mvspectrum.output = f.ww1)  # improved after one iteration
## End(Not run)

## Not run:
ww0 <- initialize_weightvector(NULL, ff, method = "rnorm")
ww1 <- foreca.EM.E_and_M_step(ww0, ff)
ww0
ww1
barplot(rbind(ww0, ww1), beside = TRUE)
abline(h = 0, col = "blue", lty = 2)
## End(Not run)

## Not run:
foreca.EM.h(ww0, ff)       # iteration 0
foreca.EM.h(ww1, ff, ww0)  # min eigenvalue inequality
foreca.EM.h(ww1, ff)       # KL divergence inequality
one.step$value

# by definition of Omega, they should equal 1 (modulo rounding errors)
Omega(mvspectrum.output = f.ww0) / 100 + foreca.EM.h(ww0, ff)
Omega(mvspectrum.output = f.ww1) / 100 + foreca.EM.h(ww1, ff)
## End(Not run)
EM-like algorithm to estimate optimal ForeCA transformation
Description
foreca.EM.one_weightvector finds the optimal weightvector \mathbf{w}^* that gives the most forecastable signal y_t^* = \mathbf{U}_t \mathbf{w}^* using an EM-like algorithm (see References).
Usage
foreca.EM.one_weightvector(
  U,
  f.U = NULL,
  spectrum.control = list(),
  entropy.control = list(),
  algorithm.control = list(),
  init.weightvector = initialize_weightvector(num.series = ncol(U), method = "rnorm"),
  ...
)
Arguments
U | a |
f.U | multivariate spectrum of class |
spectrum.control | list; control settings for spectrum estimation. See |
entropy.control | list; control settings for entropy estimation. See |
algorithm.control | list; control settings for any iterative ForeCA algorithm. See |
init.weightvector | numeric; starting point |
... | other arguments passed to |
Value
A list with useful quantities such as the optimal weightvector, the corresponding signal, and its forecastability.
See Also
foreca.one_weightvector, foreca.EM-aux
Examples
## Not run:
XX <- diff(log(EuStockMarkets)[100:200, ]) * 100
one.weight <- foreca.EM.one_weightvector(whiten(XX)$U,
                                         spectrum.control = list(method = "mvspec"))
## End(Not run)
Plot, summary, and print methods for class 'foreca.one_weightvector'
Description
S3 methods for the one weightvector optimization in ForeCA (class "foreca.one_weightvector").
summary.foreca.one_weightvector computes summary statistics.
plot.foreca.one_weightvector shows the results of an (iterative) algorithm that obtained the i-th optimal weightvector \mathbf{w}_i^*. It shows trace plots of the objective function and the weightvector, and a time series plot of the transformed signal y_t^* along with its spectral density estimate \widehat{f}_y(\omega_j).
Usage
## S3 method for class 'foreca.one_weightvector'
summary(object, lag = 10, alpha = 0.05, ...)

## S3 method for class 'foreca.one_weightvector'
plot(x, main = "", cex.lab = 1.1, ...)
Arguments
lag | integer; how many lags to test in |
alpha | significance level for testing white noise in |
... | |
x,object | an object of class |
main | an overall title for the plot: see |
cex.lab | size of the axes labels. |
Examples
# see examples in 'foreca.one_weightvector'
Initialize weightvector for iterative ForeCA algorithms
Description
initialize_weightvector returns a unit norm (in \ell^2) vector \mathbf{w}_0 \in R^K that can be used as the starting point for any iterative ForeCA algorithm, e.g., foreca.EM.one_weightvector. Several quickly computable heuristics are available via the method argument.
Usage
initialize_weightvector(
  U = NULL,
  f.U = NULL,
  num.series = ncol(U),
  method = c("rnorm", "max", "SFA", "PCA", "rcauchy", "runif", "SFA.slow",
             "SFA.fast", "PCA.large", "PCA.small"),
  seed = sample(1e+06, 1),
  ...
)
Arguments
U | a |
f.U | multivariate spectrum of class |
num.series | positive integer; number of time series |
method | string; which heuristics should be used to generate a good starting |
seed | non-negative integer; seed for random initialization which will be returned for reproducibility. By default it sets a random seed. |
... | additional arguments |
Details
The method argument specifies the heuristic that is used to get a good starting vector \mathbf{w}_0:
"max"vector with all0s, but a1at the positionof the maximum forecastable series inU."rcauchy"random start usingrcauchy(k)."rnorm"random start usingrnorm(k, 0, 1)."runif"random start usingrunif(k, -1, 1)."PCA.large"first eigenvector of PCA (largest variance signal)."PCA.small"last eigenvector of PCA (smallest variance signal)."PCA"checks both small and large, and chooses the one with higherforecastability as computed byOmega.."SFA.fast"last eigenvector of SFA (fastest signal)."SFA.slow"first eigenvector of SFA (slowest signal)."SFA"checks both slow and fast, and chooses the one with higherforecastability as computed byOmega.
Each vector has length K and is automatically normalized to have unit norm in \ell^2.
For the 'SFA*' methods see sfa. Note that maximizing (or minimizing) the lag 1 auto-correlation does not necessarily yield the most forecastable signal, but it is a good start.
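A minimal sketch of the "rnorm" heuristic (init_rnorm is a hypothetical helper; initialize_weightvector() is the function to use and also handles the data-driven methods):

# Hypothetical sketch: random start of length K, rescaled to unit l2 norm.
init_rnorm <- function(K, seed = NULL) {
  if (!is.null(seed)) set.seed(seed)
  w0 <- rnorm(K)
  w0 / sqrt(sum(w0^2))
}

w0 <- init_rnorm(4, seed = 1)
sum(w0^2)  # equals 1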
Value
numeric; a vector of length K with unit norm in \ell^2.
Examples
XX <- diff(log(EuStockMarkets))
## Not run:
initialize_weightvector(U = XX, method = "SFA")
## End(Not run)
initialize_weightvector(num.series = ncol(XX), method = "rnorm")
Estimates spectrum of multivariate time series
Description
The spectrum of a multivariate time series is a matrix-valued function of the frequency \lambda \in [-\pi, \pi], which is symmetric/Hermitian around \lambda = 0.
mvspectrum estimates it and returns a 3D array of dimension num.freqs \times K \times K. Since the spectrum is symmetric/Hermitian around \lambda = 0 it is sufficient to store only positive frequencies. In the implementation in this package we thus usually consider only positive frequencies (omitting 0); num.freqs refers to the number of positive frequencies only.
normalize_mvspectrum normalizes the spectrum such that it adds up to 0.5 over all positive frequencies (by symmetry it will add up to 1 over the whole range – thus the name normalize).
For a K-dimensional time series it adds up to a Hermitian K \times K matrix with 0.5 in the diagonal and purely imaginary elements (real parts equal to 0) in the off-diagonal. Since it is Hermitian the mvspectrum will add up to the identity matrix over the whole range of frequencies, since the off-diagonal elements are purely imaginary (real part equals 0) and thus add up to 0.
check_mvspectrum_normalized checks if the spectrum is normalized (see normalize_mvspectrum for the requirements).
mvpgram computes the multivariate periodogram estimate using a bare-bones multivariate fft (mvfft). Use mvspectrum(..., method = 'pgram') instead of mvpgram directly.
This function is merely included to have one method that does not require the astsa nor the sapa R packages. However, it is strongly encouraged to install either one of them to get (much) better estimates. See Details.
get_spectrum_from_mvspectrum extracts the spectrum of one time series from an "mvspectrum" object by taking the i-th diagonal entry for each frequency.
spectrum_of_linear_combination computes the spectrum of the linear combination \mathbf{y}_t = \mathbf{X}_t \boldsymbol \beta of K time series \mathbf{X}_t. This can be efficiently computed by the quadratic form
f_{y}(\lambda) = \boldsymbol \beta' f_{\mathbf{X}}(\lambda) \boldsymbol \beta \geq 0,
for each \lambda. This holds for any \boldsymbol \beta (even \boldsymbol \beta = \boldsymbol 0), not only for ||\boldsymbol \beta||_2 = 1. For \boldsymbol \beta = \boldsymbol e_i (the i-th basis vector) this is equivalent to get_spectrum_from_mvspectrum(..., which = i).
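The identity can be checked frequency by frequency on the 3D "mvspectrum" array; spectrum_of_combination_manual below is a hypothetical helper used only to illustrate the quadratic form:

# Hypothetical sketch: evaluate beta' f_X(lambda) beta for each frequency slice.
spectrum_of_combination_manual <- function(mvspec.array, beta) {
  apply(mvspec.array, 1, function(S.lambda) {
    drop(Re(t(beta) %*% S.lambda %*% beta))  # real-valued since S.lambda is Hermitian
  })
}

XX <- matrix(rnorm(400), ncol = 2)
SS <- mvspectrum(XX, "mvspec")
beta <- c(1, -1)
# should agree with the package implementation up to numerical precision/attributes
all.equal(spectrum_of_combination_manual(SS, beta),
          as.numeric(spectrum_of_linear_combination(SS, beta)))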
Usage
mvspectrum(
  series,
  method = c("mvspec", "pgram", "pspectrum", "ar"),
  normalize = FALSE,
  smoothing = FALSE,
  ...
)

normalize_mvspectrum(mvspectrum.output)

check_mvspectrum_normalized(f.U, check.attribute.only = TRUE)

mvpgram(series)

get_spectrum_from_mvspectrum(
  mvspectrum.output,
  which = seq_len(dim(mvspectrum.output)[2])
)

spectrum_of_linear_combination(mvspectrum.output, beta)
Arguments
series | a |
method | string; method for spectrum estimation: use |
normalize | logical; if |
smoothing | logical; if |
... | additional arguments passed to |
mvspectrum.output | an object of class |
f.U | multivariate spectrum of class |
check.attribute.only | logical; if |
which | integer(s); the spectrum of which series should be extracted. By default, it returns all univariate spectra as a matrix (frequencies in rows). |
beta | numeric; vector |
Details
For an orthonormal time series \mathbf{U}_t the raw periodogram adds up to I_K over all (negative and positive) frequencies. Since we only consider positive frequencies, the normalized multivariate spectrum should add up to 0.5 \cdot I_K plus a Hermitian imaginary matrix (which will add up to zero when combined with its symmetric counterpart). As we often use non-parametric smoothing to reduce variance, the spectrum estimates do not satisfy this identity exactly. normalize_mvspectrum thus adjusts the estimates so they satisfy it again exactly.
mvpgram has no options for improving spectrum estimation whatsoever. It thus yields very noisy (in fact, inconsistent) estimates of the multivariate spectrum f_{\mathbf{X}}(\lambda). If you want to obtain better estimates, please use other methods in mvspectrum (this is highly recommended to obtain more reasonable/stable estimates).
Value
mvspectrum returns a 3D array of dimension num.freqs \times K \times K, where
num.freqs is the number of frequencies,

K is the number of series (columns in series).
It also has an"normalized" attribute, which isFALSE ifnormalize = FALSE; otherwiseTRUE.Seenormalize_mvspectrum for details.
normalize_mvspectrum returns a normalized spectrum over positive frequencies, which:
- univariate: adds up to 0.5,
- multivariate: adds up to a Hermitian K \times K matrix with 0.5 in the diagonal and purely imaginary elements in the off-diagonal.
check_mvspectrum_normalized throws an error if the spectrum is not normalized correctly.
get_spectrum_from_mvspectrum returns either a matrix of all univariate spectra, or one single column (if which is specified).
spectrum_of_linear_combination returns a vector with length equal to the number of rows of mvspectrum.output.
References
See References in spectrum, pspectrum, mvspec.
Examples
set.seed(1)
XX <- cbind(rnorm(100), arima.sim(n = 100, list(ar = 0.9)))
ss3d <- mvspectrum(XX)
dim(ss3d)

ss3d[2, , ]  # at omega_1; in general complex-valued, but Hermitian
identical(ss3d[2, , ], Conj(t(ss3d[2, , ])))  # is Hermitian

## Not run:
ss <- mvspectrum(XX[, 1], method = "pspectrum", smoothing = TRUE)
mvspectrum(XX, normalize = TRUE)
## End(Not run)

ss <- mvspectrum(whiten(XX)$U, normalize = TRUE)

xx <- scale(rnorm(100), center = TRUE, scale = FALSE)
var(xx)
sum(mvspectrum(xx, normalize = FALSE, method = "pgram")) * 2
sum(mvspectrum(xx, normalize = FALSE, method = "mvspec")) * 2
## Not run:
sum(mvspectrum(xx, normalize = FALSE, method = "pspectrum")) * 2
## End(Not run)

## Not run:
xx <- scale(rnorm(100), center = TRUE, scale = FALSE)
ss <- mvspectrum(xx)
ss.n <- normalize_mvspectrum(ss)
sum(ss.n)
# multivariate
UU <- whiten(matrix(rnorm(40), ncol = 2))$U
S.U <- mvspectrum(UU, method = "mvspec")
mvspectrum2wcov(normalize_mvspectrum(S.U))
## End(Not run)

XX <- matrix(rnorm(1000), ncol = 2)
SS <- mvspectrum(XX, "mvspec")
ss1 <- mvspectrum(XX[, 1], "mvspec")

SS.1 <- get_spectrum_from_mvspectrum(SS, 1)
plot.default(ss1, SS.1)
abline(0, 1, lty = 2, col = "blue")

XX <- matrix(arima.sim(n = 1000, list(ar = 0.9)), ncol = 4)
beta.tmp <- rbind(1, -1, 2, 0)
yy <- XX %*% beta.tmp

SS <- mvspectrum(XX, "mvspec")
ss.yy.comb <- spectrum_of_linear_combination(SS, beta.tmp)
ss.yy <- mvspectrum(yy, "mvspec")

plot(ss.yy, log = TRUE)  # using plot.mvspectrum()
lines(ss.yy.comb, col = "red", lty = 1, lwd = 2)
S3 methods for class 'mvspectrum'
Description
S3 methods for multivariate spectrum estimation.
plot.mvspectrum plots all univariate spectra. Analogous to spectrum when plot = TRUE.
Usage
## S3 method for class 'mvspectrum'
plot(x, log = TRUE, ...)
Arguments
x | an object of class |
log | logical; if |
... | additional arguments passed to |
See Also
Examples
# see examples in 'mvspectrum'

SS <- mvspectrum(diff(log(EuStockMarkets)) * 100,
                 spectrum.control = list(method = "mvspec"))
plot(SS, log = FALSE)
Compute (weighted) covariance matrix from frequency spectrum
Description
mvspectrum2wcov computes a (weighted) covariance matrix estimate from the frequency spectrum (see Details).
weightvector2entropy_wcov computes the weighted covariance matrix using the negative entropy of the univariate spectrum (given the weightvector) as kernel weights. This matrix is the objective matrix for many foreca.* algorithms.
Usage
mvspectrum2wcov(mvspectrum.output, kernel.weights = 1)

weightvector2entropy_wcov(
  weightvector = NULL,
  f.U,
  f.current = NULL,
  entropy.control = list()
)
Arguments
mvspectrum.output | an object of class |
kernel.weights | numeric; weights for each frequency. By default uses weights that average out to |
weightvector | numeric; weights |
f.U | multivariate spectrum of class |
f.current | numeric; spectral density estimate of |
entropy.control | list; control settings for entropy estimation. See |
Details
The covariance matrix of a multivariate time series satisfies the identity
\Sigma_{X} \equiv \int_{-\pi}^{\pi} S_{X}(\lambda) d \lambda.
A generalized covariance matrix estimate can thus be obtained using a weighted average
\tilde{\Sigma}_X = \int_{-\pi}^{\pi} K(\lambda) S_{X}(\lambda) d \lambda,
where K(\lambda) is a kernel symmetric around 0 which averages out to 1 over the interval [-\pi, \pi], i.e., \frac{1}{2 \pi} \int_{-\pi}^{\pi} K(\lambda) d \lambda = 1. This allows one to remove or amplify specific frequencies in the covariance matrix estimation.
For ForeCA, mvspectrum2wcov is especially important as we use
K(\lambda) = -\log f_y(\lambda),
as the weights (their average is not 1!). This particular kernel weight is implemented as a wrapper in weightvector2entropy_wcov.
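A minimal sketch of such a kernel-weighted covariance computed directly from the 3D "mvspectrum" array (wcov_from_spectrum is a hypothetical helper; the assumed factor 2 for the omitted negative frequencies follows the sum(...) * 2 identity shown in the mvspectrum examples, and mvspectrum2wcov() remains the function to use):

# Hypothetical sketch: weighted sum of the real parts of the spectral matrices.
wcov_from_spectrum <- function(mvspec.array, kernel.weights = 1) {
  K <- dim(mvspec.array)[2]
  weights <- rep_len(kernel.weights, dim(mvspec.array)[1])
  Sigma <- matrix(0, K, K)
  for (jj in seq_len(dim(mvspec.array)[1])) {   # sum over positive frequencies
    Sigma <- Sigma + weights[jj] * Re(mvspec.array[jj, , ])
  }
  2 * Sigma  # assumed factor 2 for the omitted negative frequencies
}

XX <- matrix(rnorm(600), ncol = 3)
SS <- mvspectrum(XX, "mvspec")
wcov_from_spectrum(SS)  # compare with ...
mvspectrum2wcov(SS)     # ... the package estimate and with cov(XX)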
Value
A symmetric n \times n matrix.
If kernel.weights \geq 0, then it is positive semi-definite; otherwise, it is symmetric but not necessarily positive semi-definite.
See Also
Examples
nn <- 50
YY <- cbind(rnorm(nn), arima.sim(n = nn, list(ar = 0.9)), rnorm(nn))
XX <- YY %*% matrix(rnorm(9), ncol = 3)  # random mix
XX <- scale(XX, scale = FALSE, center = TRUE)

# sample estimate of covariance matrix
Sigma.hat <- cov(XX)
dimnames(Sigma.hat) <- NULL

# using the frequency spectrum
SS <- mvspectrum(XX, "mvspec")
Sigma.hat.freq <- mvspectrum2wcov(SS)

layout(matrix(1:4, ncol = 2))
par(mar = c(2, 2, 1, 1))
plot(c(Sigma.hat / Sigma.hat.freq))
abline(h = 1)

image(Sigma.hat)
image(Sigma.hat.freq)
image(Sigma.hat / Sigma.hat.freq)

# examples for entropy wcov
XX <- diff(log(EuStockMarkets)) * 100
UU <- whiten(XX)$U
ff <- mvspectrum(UU, "mvspec", normalize = TRUE)

ww0 <- initialize_weightvector(num.series = ncol(XX), method = 'rnorm')

weightvector2entropy_wcov(ww0, ff,
                          entropy.control = list(prior.weight = 0.1))
Computes quadratic form x' A x
Description
quadratic_form computes the quadratic form \mathbf{x}' \mathbf{A} \mathbf{x} for an n \times n matrix \mathbf{A} and an n-dimensional vector \mathbf{x}, i.e., a wrapper for t(x) %*% A %*% x.
fill_symmetric and quadratic_form work with real and complex valued matrices/vectors.
fill_hermitian fills up the lower triangular part (NA) of an upper triangular matrix to its Hermitian (symmetric if real matrix) version, such that it satisfies \mathbf{A} = \bar{\mathbf{A}}', where \bar{z} is the complex conjugate of z. If the matrix is real-valued this makes it simply symmetric.
Note that the input matrix must have a real-valued diagonal and NAs in the lower triangular part.
Usage
quadratic_form(mat, vec)

fill_hermitian(mat)
Arguments
mat | numeric; |
vec | numeric; |
Value
A real/complex value \mathbf{x}' \mathbf{A} \mathbf{x}.
Examples
## Not run:
set.seed(1)
AA <- matrix(1:4, ncol = 2)
bb <- matrix(rnorm(2))
t(bb) %*% AA %*% bb
quadratic_form(AA, bb)
## End(Not run)

AA <- matrix(1:16, ncol = 4)
AA[lower.tri(AA)] <- NA
AA

fill_hermitian(AA)
Slow Feature Analysis
Description
sfa performs Slow Feature Analysis (SFA) on a K-dimensional time series with T observations.
Important: This implementation of SFA is just the most basic version; it is merely included here for convenience in initialize_weightvector. If you want the full functionality of SFA in R, use the rSFA package, which has much more advanced and efficient implementations. sfa() here corresponds to sfa1.
Usage
sfa(series, ...)Arguments
series | a |
... | additional arguments |
Details
Slow Feature Analysis (SFA) finds slow signals (see References below). The problem has an analytic solution and thus can be computed quickly using generalized eigen-value solvers. For ForeCA it is important to know that SFA is equivalent to finding a linear combination signal with the largest lag 1 autocorrelation.
The disadvantage of SFA for forecasting is that, e.g., white noise (WN) is ranked higher than an AR(1) with negative autocorrelation coefficient \rho_1 < 0. While it is true that WN is slower, it is not more forecastable. Thus we are also interested in the fastest signal, i.e., the last eigenvector. The so obtained fastest signal corresponds to minimizing the lag 1 auto-correlation (possibly \rho_1 < 0).
Note though that maximizing (or minimizing) the lag 1 auto-correlation does not necessarily yield the most forecastable signal (as measured by Omega), but it is a good start.
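A minimal sketch of this lag-1 autocorrelation view on whitened data (sfa_sketch is a hypothetical helper, not the package's sfa() implementation; see the rSFA package for a complete version):

# Hypothetical sketch: slowest/fastest linear combinations from the symmetrized
# lag-1 autocovariance of the whitened series.
sfa_sketch <- function(series) {
  UU <- whiten(series)$U                             # zero mean, unit variance, uncorrelated
  nn <- nrow(UU)
  C1 <- crossprod(UU[-nn, ], UU[-1, ]) / (nn - 1)    # lag-1 autocovariance
  ee <- eigen((C1 + t(C1)) / 2, symmetric = TRUE)    # symmetrize, then eigendecompose
  list(U = UU,
       slowest = ee$vectors[, 1],                    # largest lag-1 autocorrelation
       fastest = ee$vectors[, ncol(UU)])             # smallest (possibly negative)
}

XX <- diff(log(EuStockMarkets)) * 100
ss <- sfa_sketch(XX)
acf(ss$U %*% ss$slowest, plot = FALSE)$acf[2]  # largest lag-1 autocorrelation
acf(ss$U %*% ss$fastest, plot = FALSE)$acf[2]  # smallest, possibly negative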
Value
An object of class sfa which inherits methods from princomp. Signals are ordered from slowest to fastest.
References
Laurenz Wiskott and Terrence J. Sejnowski (2002). “Slow Feature Analysis: Unsupervised Learning of Invariances”, Neural Computation 14:4, 715-770.
See Also
Examples
XX <- diff(log(EuStockMarkets[-c(1:100),])) * 100plot(ts(XX))ss <- sfa(XX[,1:4])summary(ss)plot(ss)plot(ts(ss$scores))apply(ss$scores, 2, function(x) acf(x, plot = FALSE)$acf[2])biplot(ss)Estimates spectral entropy of a time series
Description
Estimates spectral entropy from a univariate (or multivariate) normalized spectral density.
Usage
spectral_entropy(
  series = NULL,
  spectrum.control = list(),
  entropy.control = list(),
  mvspectrum.output = NULL,
  ...
)
Arguments
series | univariate time series of length |
spectrum.control | list; control settings for spectrum estimation. See |
entropy.control | list; control settings for entropy estimation. See |
mvspectrum.output | optional; one can directly provide an estimate of the spectrum of |
... | additional arguments passed to |
Details
The spectral entropy equals the Shannon entropy of the spectral density f_x(\lambda) of a stationary process x_t:
H_s(x_t) = - \int_{-\pi}^{\pi} f_x(\lambda) \log f_x(\lambda) d \lambda,
where the density is normalized such that \int_{-\pi}^{\pi} f_x(\lambda) d \lambda = 1. An estimate of f(\lambda) can be obtained from the (smoothed) periodogram (see mvspectrum); the estimator thus uses discrete, not continuous, entropy.
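A minimal sketch of this discrete plug-in idea using building blocks documented in this package (illustrative only; defaults for base, smoothing, and thresholding may make the numbers differ slightly from spectral_entropy()):

xx <- arima.sim(n = 200, list(ar = 0.6))

ff <- mvspectrum(xx, method = "mvspec", normalize = TRUE)  # sums to 0.5 over positive frequencies
pp <- c(ff) / sum(ff)                                      # discrete spectral distribution (sums to 1)

discrete_entropy(pp)                                              # plug-in entropy of the spectrum
spectral_entropy(xx, spectrum.control = list(method = "mvspec"))  # package estimate, for comparison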
Value
A non-negative real value for the spectral entropy H_s(x_t).
References
Jerry D. Gibson and Jaewoo Jung (2006). “The Interpretation of Spectral Entropy Based Upon Rate Distortion Functions”. IEEE International Symposium on Information Theory, pp. 277-281.
L. L. Campbell (1960). “Minimum coefficient rate for stationary random processes”. Information and Control, vol. 3, no. 4, pp. 360-371.
See Also
Examples
set.seed(1)
eps <- rnorm(100)
spectral_entropy(eps)

phi.v <- seq(-0.95, 0.95, by = 0.1)
kMethods <- c("mvspec", "pgram")
SE <- matrix(NA, ncol = length(kMethods), nrow = length(phi.v))

for (ii in seq_along(phi.v)) {
  xx.tmp <- arima.sim(n = 200, list(ar = phi.v[ii]))
  for (mm in seq_along(kMethods)) {
    SE[ii, mm] <- spectral_entropy(xx.tmp,
                                   spectrum.control = list(method = kMethods[mm]))
  }
}

matplot(phi.v, SE, type = "l", col = seq_along(kMethods))
legend("bottom", kMethods, lty = seq_along(kMethods), col = seq_along(kMethods))

# AR vs MA
SE.arma <- matrix(NA, ncol = 2, nrow = length(phi.v))
SE.arma[, 1] <- SE[, 2]

for (ii in seq_along(phi.v)) {
  yy.temp <- arima.sim(n = 1000, list(ma = phi.v[ii]))
  SE.arma[ii, 2] <- spectral_entropy(yy.temp,
                                     spectrum.control = list(method = "mvspec"))
}

matplot(phi.v, SE.arma, type = "l", col = 1:2,
        xlab = "parameter (phi or theta)", ylab = "Spectral entropy")
abline(v = 0, col = "blue", lty = 3)
legend("bottom", c("AR(1)", "MA(1)"), lty = 1:2, col = 1:2)
whitens multivariate data
Description
whiten transforms a multivariate K-dimensional signal \mathbf{X} with mean \boldsymbol \mu_X and covariance matrix \Sigma_{X} to a whitened signal \mathbf{U} with mean \boldsymbol 0 and \Sigma_U = I_K. Thus it centers the signal and makes it contemporaneously uncorrelated. See Details.
check_whitened checks if data has been whitened; i.e., if it haszero mean, unit variance, and is uncorrelated.
sqrt_matrix computes the square root \mathbf{B} of a square matrix \mathbf{A}. The matrix \mathbf{B} satisfies \mathbf{B} \mathbf{B} = \mathbf{A}.
Usage
whiten(data)

check_whitened(data, check.attribute.only = TRUE)

sqrt_matrix(mat, return.sqrt.only = TRUE, symmetric = FALSE)
Arguments
data | |
check.attribute.only | logical; if |
mat | a square |
return.sqrt.only | logical; if |
symmetric | logical; if |
Details
whiten uses zero component analysis (ZCA), aka zero-phase whitening filters, to whiten the data; i.e., it uses the inverse square root of the covariance matrix of \mathbf{X} (see sqrt_matrix) as the whitening transformation. This means that on top of PCA, the uncorrelated principal components are back-transformed to the original space using the transpose of the eigenvectors. The advantage is that this makes them comparable to the original \mathbf{X}. See References for details.
The square root of a quadratic n \times n matrix \mathbf{A} can be computed by using the eigen-decomposition of \mathbf{A}
\mathbf{A} = \mathbf{V} \Lambda \mathbf{V}',
where \Lambda is an n \times n matrix with the eigenvalues \lambda_1, \ldots, \lambda_n in the diagonal. The square root is simply \mathbf{B} = \mathbf{V} \Lambda^{1/2} \mathbf{V}' where \Lambda^{1/2} = diag(\lambda_1^{1/2}, \ldots, \lambda_n^{1/2}).
Similarly, the inverse square root is defined as \mathbf{A}^{-1/2} = \mathbf{V} \Lambda^{-1/2} \mathbf{V}', where \Lambda^{-1/2} = diag(\lambda_1^{-1/2}, \ldots, \lambda_n^{-1/2}) (provided that \lambda_i \neq 0).
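A minimal sketch of ZCA whitening via this eigendecomposition (zca_whiten is a hypothetical helper; whiten() additionally returns the dewhitening matrix and sets the attributes used by check_whitened()):

# Hypothetical sketch: center, then multiply by the inverse square root of the covariance.
zca_whiten <- function(X) {
  X <- scale(X, center = TRUE, scale = FALSE)
  ee <- eigen(cov(X), symmetric = TRUE)
  W <- ee$vectors %*% diag(1 / sqrt(ee$values)) %*% t(ee$vectors)  # Sigma^{-1/2}
  list(U = X %*% W, whitening = W)
}

XX <- matrix(rnorm(300), ncol = 3) %*% matrix(runif(9), ncol = 3)
UU <- zca_whiten(XX)$U
round(cov(UU), 10)  # approximately the identity matrix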
Value
whiten returns a list with the whitened data, the transformation, and other useful quantities.
check_whitened throws an error if the input is not whitened, and returns (invisibly) the data with an attribute 'whitened' equal to TRUE. This allows one to simply update the data to have the attribute, check it once on the actual data (slow), and then use the attribute lookup (fast).
sqrt_matrix returns an n \times n matrix. If \mathbf{A} is not semi-positive definite it returns a complex-valued \mathbf{B} (since square roots of negative eigenvalues are complex).
If return.sqrt.only = FALSE then it returns a list with:
values | eigenvalues of |
vectors | eigenvectors of |
sqrt | square root matrix |
sqrt.inverse | inverse of |
References
See the appendix in http://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf.
See http://ufldl.stanford.edu/wiki/index.php/Implementing_PCA/Whitening.
Examples
## Not run:
XX <- matrix(rnorm(100), ncol = 2) %*% matrix(runif(4), ncol = 2)
cov(XX)
UU <- whiten(XX)$U
cov(UU)
## End(Not run)