
Nonstationary line fitting

Figure 1a shows a classic example of linear regression applied to a line-fitting problem. When the same technique is applied to data with nonstationary behavior (Figure 1b), stationary regression fails to produce an accurate fit and creates regions of consistent overprediction and underprediction.

Figure 1. Line fitting with stationary regression works well for stationary data (a) but poorly for nonstationary data (b).
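
For concreteness, here is a minimal NumPy sketch of stationary line fitting by least squares. The synthetic data, noise level, and random seed are illustrative assumptions, not the actual example behind Figure 1.

    import numpy as np

    # Illustrative stationary data: a single straight line plus noise
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 200)
    d = 1.0 + 2.0 * x + 0.1 * rng.standard_normal(x.size)

    # Stationary regression: one constant intercept and slope for all x,
    # found by least squares over the basis [1, x]
    G = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(G, d, rcond=None)
    print("intercept, slope:", coef)            # close to (1, 2)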

One remedy is to extend the model by including nonlinear terms (Figure 2a); another is to break the data into local windows (Figure 2b). Both solutions work to a certain extent, but neither is completely satisfactory, because they decrease the estimation stability and introduce additional, non-intuitive parameters.

Figure 2. Nonstationary line fitting using nonlinear terms (a) and local windows (b).
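
Both remedies can be sketched in the same least-squares framework. The cubic polynomial order and the 50-sample window width below are arbitrary illustrative choices, exactly the kind of extra parameter the text warns about.

    import numpy as np

    # Illustrative nonstationary data: the slope drifts with x
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 200)
    d = 1.0 + (2.0 + np.sin(3.0 * x)) * x + 0.1 * rng.standard_normal(x.size)

    # Remedy 1: enlarge the basis with nonlinear (here polynomial) terms
    order = 3                                   # extra parameter
    Gp = np.column_stack([x**k for k in range(order + 1)])
    cp, *_ = np.linalg.lstsq(Gp, d, rcond=None)
    fit_poly = Gp @ cp

    # Remedy 2: independent straight-line fits in local windows
    width = 50                                  # extra parameter (samples)
    fit_win = np.empty_like(d)
    for lo in range(0, x.size, width):
        sl = slice(lo, lo + width)
        Gw = np.column_stack([np.ones_like(x[sl]), x[sl]])
        cw, *_ = np.linalg.lstsq(Gw, d[sl], rcond=None)
        fit_win[sl] = Gw @ cw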

The regularized nonstationary solution, defined in the previous section, is shown in Figure 3. When using shaping regularization with smoothing as the shaping operator, the only additional parameter is the radius of the smoothing operator.

Figure 3. Nonstationary line fitting by regularized nonstationary regression.
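
To make the construction concrete, here is a minimal NumPy sketch of the shaping-regularized fit, under illustrative assumptions: the same synthetic data as above, a Gaussian smoother built with scipy.ndimage.gaussian_filter1d standing in for the paper's smoothing operator, a radius of 15 samples, and a direct solve in place of iterative inversion. It uses one algebraic form of shaping regularization: substituting $\mathbf{S} = (\mathbf{I} + \epsilon^2 \mathbf{D}^T\mathbf{D})^{-1}$ into Tikhonov's normal equations produces a system that involves only the smoother $\mathbf{S}$.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    from scipy.linalg import block_diag

    # Same illustrative nonstationary data as above
    rng = np.random.default_rng(0)
    n = 200
    x = np.linspace(0.0, 1.0, n)
    d = 1.0 + (2.0 + np.sin(3.0 * x)) * x + 0.1 * rng.standard_normal(n)

    # Nonstationary model d_i = a(x_i) + b(x_i) x_i, unknowns m = [a; b]
    G = np.hstack([np.eye(n), np.diag(x)])      # shape (n, 2n)

    # Shaping operator S: Gaussian smoothing of each coefficient vector;
    # the smoothing radius is the single additional parameter
    radius = 15.0
    S1 = gaussian_filter1d(np.eye(n), sigma=radius, axis=0, mode='nearest')
    S = block_diag(S1, S1)

    # Substituting S = (I + eps^2 D'D)^{-1} into Tikhonov's normal
    # equations (G'G + eps^2 D'D) m = G'd and multiplying through by S
    # gives (I - S + S G'G) m = S G'd, expressed entirely through S
    m = np.linalg.solve(np.eye(2 * n) - S + S @ (G.T @ G), S @ (G.T @ d))
    a, b = m[:n], m[n:]
    fit = a + b * x                             # smoothly varying line fit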

This toy example makes it easy to compare shaping regularization with the more traditional Tikhonov's regularization. Figures 4 and 5 show the inverted matrix $\mathbf{A}$ from equation 5 and the distribution of its eigenvalues for two different values of Tikhonov's regularization parameter $\epsilon$, which correspond to mild and strong smoothing constraints. The operator $\mathbf{D}$ in this case is the first-order difference. Correspondingly, Figures 6 and 7 show the matrix $\widehat{\mathbf{A}}$ from equation 7 and the distribution of its eigenvalues for mild and moderate smoothing implemented with shaping. The operator $\mathbf{S}$ is Gaussian smoothing controlled by the smoothing radius.
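
The two matrices can be formed explicitly for the toy problem. The following sketch continues the one above; it assumes the first-order difference for $\mathbf{D}$ and a symmetric form of the shaping matrix, $\widehat{\mathbf{A}} = \lambda^2\mathbf{I} + \mathbf{H}^T(\mathbf{G}^T\mathbf{G} - \lambda^2\mathbf{I})\mathbf{H}$, with the Gaussian smoother standing in for the square-root factor $\mathbf{H}$ of $\mathbf{S}=\mathbf{H}\mathbf{H}^T$. The parameter values echo the figures ($\epsilon=0.1$, radius 3), but the sizes are illustrative, so the condition numbers will not reproduce the figures exactly.

    from numpy.linalg import eigvalsh

    # Continuing the sketch above. Tikhonov's matrix (cf. equation 5):
    # A = G'G + eps^2 D'D, with D the first-order difference
    eps = 0.1
    D1 = np.diff(np.eye(n), axis=0)             # (n-1, n) first difference
    D = block_diag(D1, D1)
    A_tik = G.T @ G + eps**2 * (D.T @ D)

    # Shaping matrix (cf. equation 7), in the symmetric form
    # A_hat = lam^2 I + H'(G'G - lam^2 I)H, with S = H H'
    lam = 1.0
    H1 = gaussian_filter1d(np.eye(n), sigma=3.0, axis=0, mode='nearest')
    H = block_diag(H1, H1)
    I2 = np.eye(2 * n)
    A_shp = lam**2 * I2 + H.T @ (G.T @ G - lam**2 * I2) @ H

    for name, A in (("Tikhonov", A_tik), ("shaping ", A_shp)):
        w = eigvalsh(0.5 * (A + A.T))           # symmetrize, get eigenvalues
        print(name, "condition number:", w.max() / w.min())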

When a matrix operator is inverted by an iterative method such as conjugate gradients, two characteristics control the number of iterations and therefore the cost of inversion (Golub and Van Loan, 1996; van der Vorst, 2003):

  1. the condition number $\kappa$ (the ratio between the largest and the smallest eigenvalue)
  2. the clustering of eigenvalues.
Large condition numbers and poor clustering lead to slow convergence. Figures 4-7 demonstrate that both the condition number and the clustering of eigenvalues are significantly better in the case of shaping regularization than in the case of Tikhonov's regularization. In toy problems, this difference in behavior is not critical, because one can easily invert both matrices exactly. However, the difference becomes important in large-scale applications, where inversion is iterative and reducing the number of iterations is crucial for performance.
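
Since convergence is governed by these two spectral properties, one can count iterations directly. A continuation of the sketch above, using SciPy's conjugate-gradient solver at its default tolerance; the counts are indicative only for this illustrative setup.

    from scipy.sparse.linalg import cg

    # Count conjugate-gradient iterations needed to invert each matrix
    # against the corresponding normal-equation right-hand side
    def cg_count(A, rhs):
        it = [0]
        def cb(xk):                             # called once per iteration
            it[0] += 1
        cg(A, rhs, callback=cb, maxiter=10 * rhs.size)
        return it[0]

    print("Tikhonov iterations:", cg_count(A_tik, G.T @ d))
    print("shaping  iterations:", cg_count(A_shp, H.T @ (G.T @ d)))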

As the smoothing radius increases, the matrix $\widehat{\mathbf{A}}$ approaches the identity matrix, and the result of nonstationary regression regularized by shaping approaches the result of stationary regression. This intuitively pleasing behavior is difficult to emulate with Tikhonov's regularization.
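
A quick numerical check, continuing the sketch above: growing the radius drives the spectrum of $\widehat{\mathbf{A}}$ toward unity (on this toy problem, up to a few outlying eigenvalues), so the matrix tends toward the identity.

    # As the smoothing radius grows, H passes less and less of
    # (G'G - I), so A_hat approaches the identity (up to a low-rank
    # remainder here) and its eigenvalues cluster tightly around 1
    for radius in (3.0, 15.0, 60.0):
        H1 = gaussian_filter1d(np.eye(n), sigma=radius, axis=0,
                               mode='nearest')
        H = block_diag(H1, H1)
        A_hat = I2 + H.T @ (G.T @ G - I2) @ H   # lam = 1
        w = eigvalsh(0.5 * (A_hat + A_hat.T))
        far = np.sum(np.abs(w - 1.0) > 0.01)    # eigenvalues off the cluster
        print(f"radius {radius:4.0f}: kappa = {w.max() / w.min():9.3g}, "
              f"{far} eigenvalues more than 1% from unity")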

Figure 4. Matrix inverted in Tikhonov's regularization applied to nonstationary line fitting (a) and the distribution of its eigenvalues (b). The regularization parameter $\epsilon =0.1$ corresponds to mild smoothing. The condition number is $\kappa \approx 888888$.

Figure 5. Matrix inverted in Tikhonov's regularization applied to nonstationary line fitting (a) and the distribution of its eigenvalues (b). The regularization parameter $\epsilon =10$ corresponds to strong smoothing. The condition number is $\kappa \approx 14073$. Eigenvalues are poorly clustered.

Figure 6. Matrix inverted in shaping regularization applied to nonstationary line fitting (a) and the distribution of its eigenvalues (b). The smoothing radius is 3 samples (mild smoothing). The condition number is $\kappa \approx 6055$.

Figure 7. Matrix inverted in shaping regularization applied to nonstationary line fitting (a) and the distribution of its eigenvalues (b). The smoothing radius is 15 samples (moderate smoothing). The condition number is $\kappa \approx 206$. Eigenvalues are well clustered.

