
Solving the analysis model by a data-driven approach

In order to perform the optimization in equations 4 and 5, we can employ the data-driven approach suggested by Cai et al. (2013). As an example, we introduce only the solver for equation 5.

The minimization can be solved by alternately updating the coefficient vector $ \mathbf{m}$ and the learning-based dictionary $ \mathbf{W}$ . We adopt the following algorithm for solving problem 5:
Input: Base dictionary $ \mathbf{B}$ , initial learning-based dictionary $ \mathbf{W}^{(0)}$ .

  1. Transform data from data domain to model domain according to

    $\displaystyle \mathbf{b} = \mathbf{B}^{-1} \mathbf{d}\;.$ (6)

  2. for $ k=0,1,\cdots,K-1$ :
    1. Fix the learning-based dictionary $ \mathbf{W}^{(k)}$ , estimate the double-sparsity coefficients vector $ \mathbf{m}^{(k)}$ by

      $\displaystyle \mathbf{m}^{(k)}=\arg\min_{\mathbf{m}} \frac{1}{2}\parallel \left(\mathbf{W}^{(k)}\right)^T\mathbf{b}-\mathbf{m} \parallel_2^2 + \lambda\parallel \mathbf{m} \parallel_0.$ (7)

    2. Given the double-sparsity coefficients vector $ \mathbf{m}^{(k)}$ , update the learning-based dictionary $ \mathbf{W}^{(k+1)}$ :

      \begin{displaymath}\begin{split}\mathbf{W}^{(k+1)} &= \arg\min_{\mathbf{W}} \frac{1}{2}\parallel \mathbf{W}^T\mathbf{b}-\mathbf{m}^{(k)} \parallel_2^2, \\ &\quad s.t.\; \mathbf{W}\mathbf{W}^{T} = \mathbf{I}. \end{split}\end{displaymath} (8)

    end for
After $ K$ iterations, the DSD coefficients are obtained and the observed data are sparsely represented by DSD. It is known that minimization 7 has a unique solution $ \hat{\mathbf{m}}$ , obtained by applying a hard thresholding operator to the coefficient vector $ \left(\mathbf{W}^{(k)}\right)^T\mathbf{b}$ . Minimization 8 can be solved using an SVD-based optimization approach (Cai et al., 2013).
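The alternating scheme above can be sketched in NumPy. This is a minimal illustration under assumed conventions, not the authors' implementation: patch extraction, the base-dictionary transform, and boundary handling are omitted, and `Bcoefs` stands for a matrix whose columns are base-dictionary coefficient vectors $ \mathbf{b}$ .

```python
import numpy as np

def hard_threshold(x, lam):
    # Unique solution of min_m 0.5*||x - m||_2^2 + lam*||m||_0:
    # keep entries with |x_i| >= sqrt(2*lam), set the rest to zero.
    m = x.copy()
    m[np.abs(m) < np.sqrt(2.0 * lam)] = 0.0
    return m

def update_dictionary(Bcoefs, M):
    # SVD-based tight-frame update: the minimizer of
    # ||W^T b - m||_F^2 subject to W W^T = I is W = U V^T,
    # where U S V^T is the thin SVD of Bcoefs @ M.T
    # (an orthogonal-Procrustes argument).
    U, _, Vt = np.linalg.svd(Bcoefs @ M.T, full_matrices=False)
    return U @ Vt

def learn_dsd(Bcoefs, W0, lam, K):
    # Alternate the two steps for K iterations, as in the algorithm above.
    W = W0
    for _ in range(K):
        M = hard_threshold(W.T @ Bcoefs, lam)  # fix W, solve for m
        W = update_dictionary(Bcoefs, M)       # fix m, update W
    return W, M
```

Because each update returns $ \mathbf{W} = \mathbf{U}\mathbf{V}^T$ with orthonormal rows, the constraint $ \mathbf{W}\mathbf{W}^{T} = \mathbf{I}$ is preserved at every iteration.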

Assuming the size of the coefficient domain is $ N_1\times N_2$ , and $ N=N_1N_2$ , let a filter mapping $ \mathbf{F}_\mathbf{a}:\mathcal{R}^N\rightarrow \mathcal{R}^N $ be the block-wise Toeplitz matrix representing the convolution operator with a finitely supported 2D filter $ \mathbf{a}$ under the Neumann boundary condition. The learning-based dictionary $ \mathbf{W}\in \mathcal{R} ^{N\times Np}$ can be defined as

$\displaystyle \mathbf{W} = \left[\mathbf{F}_{\mathbf{a}_1},\mathbf{F}_{\mathbf{a}_2},\cdots,\mathbf{F}_{\mathbf{a}_p} \right].$ (9)

Each $ \mathbf{a}_i$ is a 2D filter associated with a tight frame, and the columns of $ \mathbf{W}$ form a tight frame for $ \mathcal{R}^N$ ; $ p$ denotes the number of filters. The patch size discussed in the following examples corresponds to the size of each $ \mathbf{a}_i$ .
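Equation 9 can be illustrated by assembling $ \mathbf{W}$ from the convolution matrices of the four 2D Haar filters, a standard tight-frame example (not necessarily the filters used in this paper). For simplicity the sketch assumes periodic rather than Neumann boundaries, under which the tight-frame property $ \mathbf{W}\mathbf{W}^T=\mathbf{I}$ can be checked directly.

```python
import numpy as np

def conv_matrix(a, n1, n2):
    # N x N matrix (N = n1*n2) applying periodic 2D convolution with
    # filter a: column k is the response to the k-th standard basis image.
    N = n1 * n2
    F = np.zeros((N, N))
    for k in range(N):
        e = np.zeros((n1, n2))
        e[k // n2, k % n2] = 1.0
        out = np.zeros((n1, n2))
        for (i, j), c in np.ndenumerate(a):
            out += c * np.roll(np.roll(e, i, axis=0), j, axis=1)
        F[:, k] = out.ravel()
    return F

# 2D Haar filter bank: tensor products of low-pass h and high-pass g,
# normalized so that the p = 4 filters form a tight frame.
h = np.array([1.0, 1.0]) / 2.0
g = np.array([1.0, -1.0]) / 2.0
filters = [np.outer(u, v) for u in (h, g) for v in (h, g)]

n1 = n2 = 4
# W = [F_a1, F_a2, ..., F_ap] as in equation (9); here N = 16, Np = 64.
W = np.hstack([conv_matrix(a, n1, n2) for a in filters])
```

With this normalization the filters' frequency responses satisfy $ \sum_i \vert\hat{a}_i(\omega)\vert^2 = 1$ at every frequency, which is exactly the tight-frame condition on the columns of $ \mathbf{W}$ .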

Liang et al. (2014) give an example of using spline wavelets for the initial $ \left(\mathbf{W}^{(0)}\right)^T$ and show the final learned dictionary $ \left(\mathbf{W}^{(K)}\right)^T$ . Following Liang et al. (2014), we also choose spline wavelets for the initial $ \mathbf{W}$ .



2016-02-27