
Backsolving, polynomial division and deconvolution

Ordinary differential equations often lead us to the backsolving operator. For example, the damped harmonic oscillator leads to a special case of equation (22), where $(a_3,a_4,\cdots)=0$. There is a huge literature on finite-difference solutions of ordinary differential equations that lead to equations of this type. Rather than derive such an equation on the basis of many possible physical arrangements, we can begin from the filter transformation in equation (4), but put the top square of the matrix on the other side of the equation so our transformation can be called one of inversion or backsubstitution. To link up with applications in later chapters, I specialize to put 1s on the main diagonal and insert some bands of zeros.

\begin{displaymath}
\bold A \bold y \eq
\left[ \begin{array}{cccccc}
1   & 0   & 0   & 0   & 0   & 0 \\
a_1 & 1   & 0   & 0   & 0   & 0 \\
a_2 & a_1 & 1   & 0   & 0   & 0 \\
0   & a_2 & a_1 & 1   & 0   & 0 \\
0   & 0   & a_2 & a_1 & 1   & 0 \\
a_5 & 0   & 0   & a_2 & a_1 & 1
\end{array} \right]
\left[ \begin{array}{c}
y_0 \\
y_1 \\
y_2 \\
y_3 \\
y_4 \\
y_5 \\
\end{array} \right]
\eq
\left[ \begin{array}{c}
x_0 \\
x_1 \\
x_2 \\
x_3 \\
x_4 \\
x_5 \\
\end{array} \right]
\eq
\bold x
\end{displaymath} (22)

Algebraically, this operator goes by various names: ``backsolving,'' ``polynomial division,'' and ``deconvolution.'' The leaky integration transformation in equation (19) is a simple example of backsolving when $a_1=-\rho$ and $a_2=a_5=0$. To confirm, verify that the matrices in equations (22) and (19) are mutually inverse.

A typical row in equation (22) says:

\begin{displaymath}
x_t \eq y_t  + \sum_{\tau > 0}  a_\tau y_{t-\tau}
\end{displaymath} (23)

Solve equation (23) for $y_t$ by moving the sum to the other side:

\begin{displaymath}
y_t \eq x_t  - \sum_{\tau > 0}  a_\tau y_{t-\tau}
\end{displaymath} (24)

Equation (24) is a recursion to find $y_t$ from $x_t$ and the previously computed values of $y$.

In the same way that equation (4) can be interpreted as $Y(Z)=B(Z)X(Z)$, equation (22) can be interpreted as $A(Z) Y(Z) = X(Z)$, which amounts to $Y(Z) = X(Z)/A(Z)$. Thus, convolution amounts to polynomial multiplication, while the backsubstitution we are doing here is called ``deconvolution,'' and it amounts to polynomial division.

A causal operator is one that uses its present and past inputs to make its current output. Anticausal operators use the future but not the past. Causal operators are generally associated with lower triangular matrices and positive powers of $Z$, whereas anticausal operators are associated with upper triangular matrices and negative powers of $Z$. A transformation like equation (22) but with the transposed matrix would require us to run the recursive solution in the opposite direction in time, as we did with leaky integration.

A module to backsolve equation (22) is recfilt.

    /* clear the scratch array */
    for (ix=0; ix < nx; ix++) {
	tt[ix] = 0.;
    }

    if (adj) {
	/* adjoint: solve the transposed system, running backward in time */
	for (ix = nx-1; ix >= 0; ix--) {
	    tt[ix] = yy[ix];
	    for (ia = 0; ia < SF_MIN(na,ny-ix-1); ia++) {
		iy = ix + ia + 1;
		tt[ix] -= aa[ia] * tt[iy];
	    }
	}
	for (ix=0; ix < nx; ix++) {
	    xx[ix] += tt[ix];
	}
    } else {
	/* forward: the recursion of equation (24) */
	for (iy = 0; iy < ny; iy++) {
	    tt[iy] = xx[iy];
	    for (ia = 0; ia < SF_MIN(na,iy); ia++) {
		ix = iy - ia - 1;
		tt[iy] -= aa[ia] * tt[ix];
	    }
	}
	for (iy=0; iy < ny; iy++) {
	    yy[iy] += tt[iy];
	}
    }

We may wonder why the adjoint of $\bold A \bold y = \bold x$ actually is $\bold A\T \hat{\bold x}= \bold y$. Using the well-known fact that the inverse of a transpose is the transpose of the inverse, we have:

\begin{eqnarray*}
\bold y &=& \bold A^{-1} \bold x \hspace{2cm} (25) \\
\hat{\bold x} &=& (\bold A^{-1})\T \bold y \hspace{2cm} (26) \\
\hat{\bold x} &=& (\bold A\T)^{-1} \bold y \hspace{2cm} (27) \\
\bold A\T \hat{\bold x} &=& \bold y \hspace{2cm} (28)
\end{eqnarray*}
