Optimally damped rank-reduction method

In this section, we will derive a damping operator for the optimally estimated signal components to further reduce the residual noise after rank reduction. Let

$\displaystyle \mathbf{Q} = [\mathbf{U}_1^{Q}\quad \mathbf{U}_2^{Q}]\left[\begin{array}{cc}
\boldsymbol{\Sigma}_1^{Q} & \mathbf{0}\\
\mathbf{0} & \boldsymbol{\Sigma}_2^{Q}
\end{array}\right]\left[\begin{array}{c}
(\mathbf{V}_1^{Q})^H\\
(\mathbf{V}_2^{Q})^H
\end{array}\right]$ (23)

be an SVD of the matrix $\mathbf{Q}$, and let

$\displaystyle \mathbf{U}_1^{Q} = \mathbf{U}_1^{M},$ (24)
$\displaystyle \boldsymbol{\Sigma}_1^{Q} = \hat{\mathbf{W}}\boldsymbol{\Sigma}_1^{M},$ (25)
$\displaystyle \mathbf{V}_1^{Q} = \mathbf{V}_1^{M},$ (26)

then equation 22 can be understood as a TSVD of the matrix $\mathbf{Q}$:

$\displaystyle \tilde{\mathbf{Q}}=\mathbf{U}_1^{Q}\boldsymbol{\Sigma}_1^{Q}{\mathbf{V}_1^{Q}}^H.$ (27)
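As a minimal NumPy sketch (not the authors' implementation), the TSVD in equation 27 amounts to keeping only the leading singular triplets of the matrix, with the rank parameter `r` playing the role of the truncation level:

```python
import numpy as np

def tsvd(Q, r):
    """Truncated SVD: keep only the r largest singular triplets of Q."""
    U, s, Vh = np.linalg.svd(Q, full_matrices=False)
    # Rank-r approximation U_1 Sigma_1 V_1^H (equation 27).
    return U[:, :r] @ np.diag(s[:r]) @ Vh[:r, :]
```

For a noise-free rank-$r$ matrix this reproduces the input exactly; with noise it discards the subspace spanned by the smaller singular values.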

Analogous to equation 14, equation 22 can be reformulated as

$\displaystyle \tilde{\mathbf{Q}}=\hat{\mathbf{M}}=\tilde{\mathbf{S}}+\mathbf{U}_1^{\tilde{\mathbf{S}}}{\mathbf{U}_1^{\tilde{\mathbf{S}}}}^H\tilde{\mathbf{N}}.$ (28)

When the rank is sufficiently large, we assume the estimated signal contains all signal components of the originally observed noisy data and contains less noise than the observed data. To further suppress the residual noise in the estimated signal, we re-analyze $\hat{\mathbf{M}}$ in detail. We can express the newly estimated signal as:

$\displaystyle \tilde{\mathbf{Q}}=\hat{\mathbf{M}}=\mathbf{S}+\mathbf{U}_1^{S}{\mathbf{U}_1^{S}}^H\tilde{\mathbf{N}},$ (29)

where $\mathbf{U}_1^{S}{\mathbf{U}_1^{S}}^H\tilde{\mathbf{N}}$ denotes the residual noise component after the step using equation 22.

Following Chen et al. (2016c) and Huang et al. (2016), $\mathbf{S}$ can be approximated as:

$\displaystyle \mathbf{S}$ $\displaystyle = \mathbf{U}_1^{Q}\boldsymbol{\Sigma}_1^{Q}\mathbf{T}\left(\mathbf{V}_1^{Q}\right)^H,$ (30)
$\displaystyle \mathbf{T}$ $\displaystyle = \mathbf{I}-\boldsymbol{\Gamma},$ (31)

where $\mathbf{I}$ is an identity matrix and we refer to $\mathbf{T}$ as the damping operator. The damping term $\boldsymbol{\Gamma}$ is expressed as:

$\displaystyle \boldsymbol{\Gamma} \approx \hat{\delta}^N\left(\boldsymbol{\Sigma}_1^{Q}\right)^{-N},$ (32)

where $\hat{\delta}$ denotes the maximum element of $\boldsymbol{\Sigma}_2^{Q}$ and $N$ denotes the damping factor.
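For illustration, the damping operator of equations 31 and 32 can be formed directly from the kept and discarded singular values. The following is a sketch under the assumption that $\boldsymbol{\Sigma}_1^{Q}$ and $\boldsymbol{\Sigma}_2^{Q}$ are available as 1-D arrays of singular values:

```python
import numpy as np

def damping_operator(s1, s2, N):
    """Damping operator T = I - Gamma (equations 31-32).

    s1 : kept singular values (diagonal of Sigma_1^Q), 1-D array
    s2 : discarded singular values (diagonal of Sigma_2^Q), 1-D array
    N  : damping factor
    """
    delta = s2.max()                     # delta_hat: largest discarded singular value
    gamma = delta**N * s1**(-float(N))   # Gamma = delta_hat^N * (Sigma_1^Q)^{-N}
    return np.diag(1.0 - gamma)          # T = I - Gamma
```

Because each kept singular value is at least $\hat{\delta}$, the diagonal entries of $\mathbf{T}$ lie in $[0,1)$: components whose singular values sit close to the noise level $\hat{\delta}$ are shrunk strongly, while a larger $N$ leaves the strong components nearly untouched.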

Considering that $\mathbf{U}_1^{Q} = \mathbf{U}_1^{M}$, $\boldsymbol{\Sigma}_1^{Q}= \hat{\mathbf{W}}\boldsymbol{\Sigma}_1^{M}$, and $\mathbf{V}_1^{Q} = \mathbf{V}_1^{M}$, equation 30 can be expressed as:

$\displaystyle \mathbf{S} = \mathbf{U}_1^{M} \hat{\mathbf{W}}\boldsymbol{\Sigma}_1^{M} \mathbf{T} \left(\mathbf{V}_1^{M}\right)^H,$ (33)

which is referred to as the optimally damped rank-reduction method. The operator $\mathcal{F}$ can thus be chosen as the operation defined in equation 33, and through iterations we can reconstruct and denoise the 5D frequency-domain seismic data.
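A single application of equation 33 can be sketched as follows. This is not the authors' code, and the optimal weighting matrix $\hat{\mathbf{W}}$ (from equation 22) is assumed to be supplied as an $r \times r$ diagonal matrix computed in the preceding weighting step:

```python
import numpy as np

def optimally_damped_rr(M, r, W, N):
    """One optimally damped rank-reduction step (equation 33).

    M : (block Hankel) data matrix
    r : rank parameter
    W : r x r optimal weighting matrix (assumed given, equation 22)
    N : damping factor
    """
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    s1, s2 = s[:r], s[r:]
    delta = s2.max() if s2.size else 0.0
    # Damping operator T = I - delta^N * Sigma_1^{-N} (equations 31-32).
    T = np.eye(r) - delta**N * np.diag(s1**(-float(N)))
    # S = U_1^M W Sigma_1^M T (V_1^M)^H  (equation 33).
    return U[:, :r] @ W @ np.diag(s1) @ T @ Vh[:r, :]
```

In a reconstruction loop, this operation would serve as $\mathcal{F}$, applied to each frequency slice's block Hankel matrix and typically followed by anti-diagonal averaging and re-insertion of the observed traces.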

There are two main advantages of the optimally damped rank-reduction method. First, compared with the traditional and damped rank-reduction methods, the optimally damped rank-reduction method is insensitive to the rank parameter, making it nearly an adaptive method for rank-reduction-based seismic denoising and reconstruction. This advantage is important because one of the most troublesome problems in processing complicated seismic data is the selection of the rank: a large rank tends to leave significant residual noise, while a small rank tends to damage useful signals. This parameterization problem becomes more serious when the rank-reduction method is applied locally in windows. Because of the proposed method's insensitivity to the rank, one can choose a relatively large rank for all complicated datasets or local patches. Second, compared with the optimal-weighting-based rank-reduction method, the proposed method can further suppress the noise components that reside mostly in the smaller singular values. The damping operator is data-driven and can adaptively separate signal and noise in the singular-value spectrum beyond what the optimal weighting achieves.

In the rank-reduction methods, construction of the level-four block Hankel matrix is computationally very expensive. Recent advances in rank-reduction-based methods show that explicit construction of the block Hankel matrix is not required: these methods exploit the structure of such matrices to avoid forming them prior to factorization (Cheng and Sacchi, 2016; Lu et al., 2015). When factorizing data with only one or two spatial dimensions these approaches are not necessary, but moving to three or four spatial dimensions is not computationally feasible without matrix-free approaches. At the current stage, we cannot move from the SVD-based method to an SVD-free method because the damping operation has not yet been derived for the SVD-free case. Although the algorithm incurs a large computational cost and is not yet practical, it remains promising. We will continue investigating ways to accelerate the current algorithm and make it computationally feasible in the future.

