Multidimensional autoregression

To get the bottom rows from the top rows, we simply reverse the order of the components within each row. That reverses the input time function. (Reversing the order within each column would instead reverse the output time function.) The matrix diagonals then tip to the left instead of to the right. We could build this matrix from our old familiar convolution matrix and a time-reversal matrix.
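As a small sketch of that last remark (the filter and sizes here are illustrative assumptions), the reversed matrix is the product of an ordinary convolution matrix and an exchange matrix that reverses the input vector:

```python
import numpy as np

def conv_matrix(f, n):
    """Full convolution matrix of filter f acting on a length-n input;
    its diagonals tip down to the right."""
    nf = len(f)
    C = np.zeros((n + nf - 1, n))
    for j in range(n):
        C[j:j + nf, j] = f
    return C

f = np.array([1.0, 2.0, -1.0])   # example filter (assumption)
n = 5
C = conv_matrix(f, n)

R = np.fliplr(np.eye(n))          # time-reversal (exchange) matrix
C_rev = C @ R                     # columns reversed: diagonals tip left

x = np.arange(1.0, n + 1.0)
# Reversing within each row reverses the input time function:
print(np.allclose(C_rev @ x, C @ x[::-1]))   # True
# Reversing within each column reverses the output time function:
R_out = np.flipud(np.eye(C.shape[0]))
print(np.allclose((R_out @ C) @ x, (C @ x)[::-1]))   # True
```

The same exchange matrix applied on the output side (`R_out @ C`) reverses the output time function, matching the parenthetical remark above.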

It is interesting to notice how time-reversal symmetry applies to Figure 15. First, with time running both forward and backward, the residual space doubles in size. The time-reversed part gives a selector for Figure 15 with the gap along the right edge instead of the left edge. Thus we acquire a few new regression equations.

Some of my research codes include these symmetries, but I excluded them here. Nowhere did I see that the reversal symmetry made a noticeable difference in results, but in coding it makes noticeable clutter by expanding the residual to a two-component *residual array*.
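A minimal sketch of such a two-component residual array, assuming a one-dimensional prediction-error filter (the names, filter values, and data here are illustrative, not from the original research codes):

```python
import numpy as np

def residual_array(a, d):
    """Two-component residual: the prediction-error filter a applied to
    the data d running forward in time, and applied again to the
    time-reversed data.  (Illustrative sketch, not the author's code.)"""
    r_forward = np.convolve(d, a, mode="valid")       # forward-time residual
    r_reversed = np.convolve(d[::-1], a, mode="valid")  # backward-time residual
    return np.stack([r_forward, r_reversed])          # residual space doubled

a = np.array([1.0, -0.5, 0.25])   # example PEF (assumption)
d = np.random.default_rng(0).standard_normal(20)
r = residual_array(a, d)
print(r.shape)   # (2, 18): twice as many regression equations
```

The doubled first dimension is exactly the clutter referred to above: every place the code handles a residual vector must now handle a two-component array.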

Where the data grow exponentially toward the boundary, I expect extrapolated data to diverge as well. You can force the extrapolation to go to zero (or to any specified value) at some distance from the body of the known data. To do so, surround the body of data with missing data, and surround that with a specification of "enough" zeros, where "enough" is defined by the filter length.
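The layout just described can be sketched as follows; the filter length, data values, and missing-zone width are illustrative assumptions:

```python
import numpy as np

nf = 3                                   # filter length -> "enough" zeros
known = np.array([1.0, 2.0, 4.0, 8.0])   # body of known data (assumption)
n_missing = 5                            # missing samples on each side

pad = n_missing + nf
d = np.zeros(len(known) + 2 * pad)
d[pad:pad + len(known)] = known          # data body, zero collar at both ends

# True where the sample is specified (the data body or the zero collar of
# length nf); False where it is missing and free for the estimator to fill.
specified = np.zeros(d.size, dtype=bool)
specified[:nf] = True                    # leading zero collar
specified[-nf:] = True                   # trailing zero collar
specified[pad:pad + len(known)] = True   # known data body

print(d.size, int(specified.sum()))      # 20 samples, 10 of them specified
```

Because the zero collar is as long as the filter, every regression equation that reaches past the missing zone is pinned by specified zeros, so the extrapolation is forced toward zero rather than allowed to diverge.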


2013-07-26