Is VAR model fitting invariant to PCA transformations?¶
MVARICA usually applies a PCA transform to the EEG prior to VAR model fitting. This is intended as a dimensionality reduction step; PCA components that contribute little to total EEG variance are removed. However, PCA produces orthogonal components; in other words, the PCA transformed signals are mutually uncorrelated.
The question was raised whether it is possible to reconstruct (fit) a VAR model from PCA transformed signals. Here we show that this is, in fact, the case.
We will denote a VAR model with coefficients \(\mathbf{C}\) and innovation process \(\vec{\epsilon}\) as \(\mathrm{VAR}(\mathbf{C},\vec{\epsilon})\).
Let’s start with a VAR process \(\vec{x}_n = \mathrm{VAR}(\mathbf{A},\vec{e})\). If the model contains causal structure, elements of \(\vec{x}\) will in most cases show some degree of correlation. Let \(\vec{y}_n = \mathbf{W} \vec{x}_n\) be the PCA transformed signal. Furthermore, assume that \(\vec{y}\) is a VAR process too: \(\vec{y}_n = \mathrm{VAR}(\mathbf{B},\vec{r})\).
In order to reconstruct the original VAR model \(\mathrm{VAR}(\mathbf{A},\vec{e})\) from \(\mathrm{VAR}(\mathbf{B},\vec{r})\), the following requirements need to be met:

1. \(\mathrm{VAR}(\mathbf{B},\vec{r})\) can be transformed into \(\mathrm{VAR}(\mathbf{A},\vec{e})\) when the PCA transform \(\mathbf{W}\) is known.

2. A VAR model can have zero cross-correlation despite having causal structure.
The first requirement is obvious: only if the models can be transformed into each other is it possible to reconstruct one model from the other. Since the PCA transformation \(\mathbf{W}\) is an orthogonal matrix, its inverse is its transpose, \(\mathbf{W}^{-1} = \mathbf{W}^\intercal\). In the section Linear transformation of a VAR model we show that such a transformation of VAR models is possible if \(\mathbf{S} \mathbf{R} = \mathbf{I}\) and \(\mathbf{R} \mathbf{S} = \mathbf{I}\). This is the case with PCA, since \(\mathbf{W}^\intercal \mathbf{W} = \mathbf{W} \mathbf{W}^\intercal = \mathbf{I}\).
The second requirement relates to the fact that, in order to reconstruct model A from model B, all information about A must be present in B. Thus, information about the causal structure of A must be preserved in B, even though the signals of \(\vec{y}_n = \mathrm{VAR}(\mathbf{B},\vec{r})\) are uncorrelated. The section Covariance of a bivariate AR(1) process shows that it is possible to construct models whose causal structure cancels out the cross-correlation.
In conclusion, it is possible to fit VAR models on PCA transformed signals and reconstruct the original model.
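This can be verified numerically. The following sketch (a minimal illustration using NumPy; the coefficient values, the random orthogonal matrix standing in for a full-rank PCA rotation, and the plain least-squares fit are assumptions made for this example, not a prescribed implementation) simulates a bivariate VAR(1) process, rotates it, fits a VAR(1) model to the rotated signal, and maps the fitted coefficients back with the inverse rotation:

```python
import numpy as np

rng = np.random.default_rng(0)

# True VAR(1) coefficient matrix (example values, chosen to give a stable process).
A = np.array([[0.5, 0.3],
              [-0.2, 0.4]])

# Simulate x_n = A x_{n-1} + e_n with unit-variance white noise innovations.
n = 100_000
x = np.zeros((n, 2))
for t in range(1, n):
    x[t] = A @ x[t - 1] + rng.standard_normal(2)

# Random orthogonal matrix W standing in for a full-rank PCA rotation: y_n = W x_n.
W, _ = np.linalg.qr(rng.standard_normal((2, 2)))
y = x @ W.T

# Least-squares VAR(1) fit on the transformed signal: y_n ~= B y_{n-1}.
B = np.linalg.lstsq(y[:-1], y[1:], rcond=None)[0].T

# Map the fitted coefficients back to the original space: A ~= W^T B W.
A_reconstructed = W.T @ B @ W
print(np.round(A_reconstructed, 2))  # close to the true A
```

With a finite number of samples the recovered coefficients match the true ones only up to estimation error.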
Linear transformation of a VAR model¶
We start with two VAR models, one for each vector signal \(\vec{x}\) and \(\vec{y}\):
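\[
\vec{x}_n = \sum_{k=1}^{p} \mathbf{A}_k \vec{x}_{n-k} + \vec{e}_n
\qquad \text{and} \qquad
\vec{y}_n = \sum_{k=1}^{p} \mathbf{B}_k \vec{y}_{n-k} + \vec{r}_n ,
\]

where \(p\) is the (common) model order and \(\mathbf{A}_k\), \(\mathbf{B}_k\) denote the lag-\(k\) coefficient matrices.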
Now assume that \(\vec{x}\) and \(\vec{y}\) can be transformed into each other by linear transformations:
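\[
\vec{x}_n = \mathbf{S} \vec{y}_n
\qquad \text{and} \qquad
\vec{y}_n = \mathbf{R} \vec{x}_n .
\]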
Note that
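\[
\vec{x}_n = \mathbf{S} \mathbf{R} \vec{x}_n
\qquad \text{and} \qquad
\vec{y}_n = \mathbf{R} \mathbf{S} \vec{y}_n ,
\]

which can only hold for arbitrary signals if \(\mathbf{S} \mathbf{R} = \mathbf{I}\) and \(\mathbf{R} \mathbf{S} = \mathbf{I}\).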
By substituting the transformations into the VAR model equations we obtain
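\[
\mathbf{S} \vec{y}_n = \sum_{k=1}^{p} \mathbf{A}_k \mathbf{S} \vec{y}_{n-k} + \vec{e}_n
\qquad \text{and} \qquad
\mathbf{R} \vec{x}_n = \sum_{k=1}^{p} \mathbf{B}_k \mathbf{R} \vec{x}_{n-k} + \vec{r}_n .
\]

Left-multiplying the first equation by \(\mathbf{R}\) and the second by \(\mathbf{S}\) again isolates \(\vec{y}_n\) and \(\vec{x}_n\), respectively.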
Thus, each model can be transformed into the other by
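\[
\mathbf{B}_k = \mathbf{R} \mathbf{A}_k \mathbf{S}, \quad \vec{r}_n = \mathbf{R} \vec{e}_n
\qquad \text{and} \qquad
\mathbf{A}_k = \mathbf{S} \mathbf{B}_k \mathbf{R}, \quad \vec{e}_n = \mathbf{S} \vec{r}_n .
\]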
Conclusion: We can equivalently formulate VAR models for vector signals, if these signals are related by linear transformations that satisfy \(\mathbf{S} \mathbf{R} = \mathbf{I}\) and \(\mathbf{R} \mathbf{S} = \mathbf{I}\).
Covariance of a bivariate AR(1) process¶
Consider the bivariate AR(1) process given by
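\[
\begin{aligned}
x_{1,n} &= a_{11} x_{1,n-1} + a_{12} x_{2,n-1} + e_{1,n} \\
x_{2,n} &= a_{21} x_{1,n-1} + a_{22} x_{2,n-1} + e_{2,n} ,
\end{aligned}
\]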
where \(e_1\) and \(e_2\) are uncorrelated Gaussian white noise processes with zero mean and unit variance.
The process variances \(s_i^2\) and covariance \(r\) are obtained by solving the following system of equations [1]:
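\[
\begin{aligned}
s_1^2 &= a_{11}^2 s_1^2 + a_{12}^2 s_2^2 + 2 a_{11} a_{12} r + 1 \\
s_2^2 &= a_{21}^2 s_1^2 + a_{22}^2 s_2^2 + 2 a_{21} a_{22} r + 1 \\
r &= a_{11} a_{21} s_1^2 + a_{12} a_{22} s_2^2 + (a_{11} a_{22} + a_{12} a_{21}) r .
\end{aligned}
\]

These relations follow from taking the variance and covariance of both model equations under the assumption of stationarity.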
In general, a VAR model with causal structure (\(a_{12} \neq 0\) and/or \(a_{21} \neq 0\)) shows some instantaneous correlation (covariance \(r \neq 0\)) between its signals.
Now, let’s constrain the system to zero covariance \(r = 0\).
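The system then reduces to

\[
\begin{aligned}
s_1^2 &= a_{11}^2 s_1^2 + a_{12}^2 s_2^2 + 1 \\
s_2^2 &= a_{21}^2 s_1^2 + a_{22}^2 s_2^2 + 1 \\
0 &= a_{11} a_{21} s_1^2 + a_{12} a_{22} s_2^2 .
\end{aligned}
\]

The third equation can be satisfied with nonzero off-diagonal coefficients: for example, \(a_{11} = a_{22} = 0\) with \(a_{12}, a_{21} \neq 0\) (and \(|a_{12} a_{21}| < 1\) for stability) describes a process in which each signal drives the other, yet the instantaneous covariance is zero.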
Conclusion: It is possible to construct special cases where VAR processes with causal structure have no instantaneous correlation.