
Next: Step 2: Data interpolation Up: Theory Previous: Theory

Step 1: The $ t$ -$ x$ -$ y$ SPF estimation

Linear events with different constant dips can be predicted by a PF or an autoregression operator in the time-space domain, which is calculated to minimize the energy of the prediction error. Consider a 3D $ t$ -$ x$ -$ y$ PF $ a_{i,j,k}$ to predict a given centered sample $ d(t,x,y)$ of data:

$\displaystyle \widehat{d}(t,x,y) = \sum_{i=-L}^{L}\sum_{\substack{j=-M\\ j\ne0}}^{M}\sum_{\substack{k=-N\\ k\ne0}}^{N}a_{i,j,k}(t,x,y)d_{i,j,k}(t,x,y),$ (1)

where $ d_{i,j,k}(t,x,y)$ represents the translation of $ d(t,x,y)$ with time shifts $ i$ and space shifts $ j$ and $ k$ , nonstationary filter coefficients $ a_{i,j,k}(t,x,y)$ change with time and space axes, and $ L$ , $ M$ , and $ N$ control the lengths of the filter along $ t$ , $ x$ , and $ y$ -axes, respectively.
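As a concrete illustration, the prediction of equation 1 is a weighted sum over shifted copies of the data. The function below is a minimal NumPy sketch of that sum; the array layout and function name are our own illustration, not the original implementation:

```python
import numpy as np

def predict_sample(d, a, t, x, y, L, M, N):
    """Sketch of equation 1: predict the centered sample d[t, x, y] from
    time- and space-shifted neighbors weighted by the 3D PF coefficients a.
    a has shape (2L+1, 2M+1, 2N+1); terms with j = 0 or k = 0 are skipped,
    so the centered trace never predicts itself."""
    pred = 0.0
    for i in range(-L, L + 1):
        for j in range(-M, M + 1):
            if j == 0:
                continue
            for k in range(-N, N + 1):
                if k == 0:
                    continue
                pred += a[i + L, j + M, k + N] * d[t + i, x + j, y + k]
    return pred
```

For locally planar data, coefficients that sum to one reproduce the centered sample exactly, which is the local plane-wave assumption behind the PF.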

In linear algebra notation, the filter coefficients $ a_{i,j,k}$ are determined by solving the underdetermined least-squares problem:

$\displaystyle \widehat{\mathbf{a}}(t,x,y)=\arg\min_{\mathbf{a}(t,x,y)}\parallel d(t,x,y)-\mathbf{d}(t,x,y)^{T}\mathbf{a}(t,x,y)\parallel_{2}^{2},$ (2)

where $ \mathbf{a}(t,x,y)$ represents the vector of filter coefficients and $ \mathbf{d}(t,x,y)$ represents the vector of data translations $ d_{i,j,k}(t,x,y)$ . For nonstationary situations, different regularization terms can be used to constrain equation 2, such as global smoothness (Liu and Fomel, 2011). Sacchi and Naghizadeh (2009) introduced a local smoothness constraint to calculate the adaptive prediction filter. Fomel and Claerbout (2016) proposed the same constraint and solved the algebraic problem analytically with streaming computation, which produced the same results as Sacchi and Naghizadeh's method. The local constraint requires that the new filter $ \mathbf{a}$ stay close to the prior neighboring filter $ \mathbf{\bar{a}}$ , i.e., $ \xi\mathbf{a}\approx\xi\mathbf{\bar{a}}$ , where $ \xi$ is a scale parameter. However, this regularization term occasionally fails in the presence of strong amplitude variation. Thus, we improved the constraint with varying smoothness. The SPF in the $ t$ -$ x$ -$ y$ domain is found by solving the least-squares problem:

$\displaystyle \widehat{\mathbf{a}}(t,x,y)=\arg\min_{\mathbf{a}(t,x,y)}\parallel d(t,x,y)-\mathbf{d}(t,x,y)^{T}\mathbf{a}(t,x,y)\parallel_{2}^{2}+ \sum_{n}\xi_{n}^{2}\parallel \mathbf{a}(t,x,y)-\mathbf{E}_n\mathbf{\bar{a}_n}(t,x,y)\parallel_{2}^{2},$ (3)

where $ \mathbf{E}_n$ is the similarity matrix, which controls the closeness between adjacent filters. The design of $ \mathbf{E}_n$ can use the data values and follows three principles:
1. The PF characterizes the energy spectra of the data; hence, both the adjacent data and the adjacent PFs are similar under the local plane-wave assumption. Therefore, $ \mathbf{E}_n$ should be close to the identity matrix.
2. The data value should not be used alone in the expression of $ \mathbf{E}_n$ ; otherwise, the calculation becomes unstable because missing seismic data contain a large number of zero values.
3. The variation of the data value can reasonably control the local smoothness of the filter coefficients.

In this study, we designed the $ \mathbf{E}_n$ based on the amplitude difference of the smoothed data:

\begin{equation*}\mathbf{E}_n= \begin{bmatrix}1+\delta_{n}(\tilde{d}_{n-1}-\tilde{d}_{n}) & & \\ & \ddots & \\ & & 1+\delta_{n}(\tilde{d}_{n-i}-\tilde{d}_{n-i+1}) \end{bmatrix}\end{equation*} (4)

where $ \delta_n$ is the scale factor and $ \tilde{d}$ represents the smoothed version of the data, which is less affected by random noise, e.g., the data preprocessed with a Gaussian filter.
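A minimal sketch of this construction follows, using a simple moving-average smoother as a stand-in for the Gaussian preprocessing; the function names and window length are illustrative assumptions, not the original implementation:

```python
import numpy as np

def smooth(d, width=5):
    """Moving-average stand-in for the Gaussian smoothing of the data."""
    pad = width // 2
    padded = np.pad(d, pad, mode="edge")
    return np.convolve(padded, np.ones(width) / width, mode="valid")

def similarity_matrix(d_window, delta):
    """Sketch of equation 4: a diagonal E_n whose entries are
    1 + delta * (amplitude difference of adjacent smoothed samples)."""
    d_tilde = smooth(d_window)
    diffs = d_tilde[:-1] - d_tilde[1:]
    return np.diag(1.0 + delta * diffs)
```

Where the smoothed data vary slowly, the differences vanish and $ \mathbf{E}_n$ reduces to the identity matrix, consistent with the first design principle above.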

In a 3D case, the regularization term in equation 3 should include three directions:

\begin{displaymath}\begin{cases}\xi_t\mathbf{a}(t,x,y)\approx\xi_t\mathbf{E}_t\mathbf{a}(t-1,x,y),\\ \xi_x\mathbf{a}(t,x,y)\approx\xi_x\mathbf{E}_x\mathbf{a}(t,x-1,y),\\ \xi_y\mathbf{a}(t,x,y)\approx\xi_y\mathbf{E}_y\mathbf{a}(t,x,y-1). \end{cases}\end{displaymath} (5)

The least-squares solution of equation 3 is:

$\displaystyle \mathbf{a}(t,x,y)=[\mathbf{d}(t,x,y)\mathbf{d}(t,x,y)^{T}+\xi^2\mathbf{I}]^{-1} [d(t,x,y)\mathbf{d}(t,x,y)+\xi^2\mathbf{\tilde{a}}(t,x,y)],$ (6)

where

\begin{equation*}\begin{aligned}\mathbf{\tilde{a}}(t,x,y)&=\frac{\xi_t^2\mathbf{E}_t\mathbf{a}(t-1,x,y)+\xi_x^2\mathbf{E}_x\mathbf{a}(t,x-1,y)+\xi_y^2\mathbf{E}_y\mathbf{a}(t,x,y-1)} {\xi^2},\\ \xi^2&=\xi_t^2+\xi_x^2+\xi_y^2,\\ \end{aligned}\end{equation*} (7)

and $ \mathbf{I}$ is the identity matrix. The regularization parameters $ \xi_n$ should have the same order of magnitude as the data. From equation 7, $ \xi^2_{n}\mathbf{E}_n$ can be considered as a whole term, which provides adaptive smoothness for the nonstationary PF.
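The weighted average of equation 7 can be sketched as follows; the function name and argument layout are illustrative assumptions:

```python
import numpy as np

def weighted_prior(neighbors, E_mats, xi2s):
    """Sketch of equation 7: scale each neighboring filter by its similarity
    matrix E_n, weight by xi_n^2, and normalize by xi^2 = xi_t^2 + xi_x^2 + xi_y^2.
    neighbors: filters a(t-1,x,y), a(t,x-1,y), a(t,x,y-1); E_mats, xi2s match."""
    xi2 = sum(xi2s)
    a_tilde = sum(w * (E @ a) for w, E, a in zip(xi2s, E_mats, neighbors)) / xi2
    return a_tilde, xi2
```

When the three neighboring filters agree and each $ \mathbf{E}_n$ is the identity, the prior $ \mathbf{\tilde{a}}$ equals the common neighbor, as expected for stationary data.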

In equation 5, a stable update of the SPF requires that adjacent filter coefficients have the same order of magnitude, and this stability condition depends on the selection of the parameters $ \delta_n$ and $ \xi_n$ . We calculate the difference between the maximum and minimum values of the data and select $ \delta_n$ as the reciprocal of this difference, which guarantees that $ \mathbf{E}_n$ stays close to the identity matrix. Meanwhile, the parameter $ \xi_n$ should be chosen as a constant value between the minimum and maximum values of the data, according to the desired smoothness level of the regularization.
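These parameter choices can be sketched as below; the midpoint used for $ \xi_n$ is one illustrative value inside the stated range, not a prescription from the text:

```python
import numpy as np

def choose_parameters(d):
    """Heuristic parameter selection described in the text:
    delta_n is the reciprocal of the data range, keeping E_n near identity;
    xi_n is a constant between the data's minimum and maximum values
    (the midpoint here is an illustrative choice)."""
    d_range = d.max() - d.min()
    delta = 1.0 / d_range
    xi = 0.5 * (d.min() + d.max())
    return delta, xi
```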

The inverse matrix in equation 6 can be calculated directly, without an iterative conjugate-gradient method. The Sherman-Morrison formula (Hager, 1989) provides an analytic solution for the inverse of a special matrix of the form $ (\mathbf{A}-\mathbf{BC})^{-1}$ , where matrix $ \mathbf{B}$ is a column vector and matrix $ \mathbf{C}$ is a row vector. If $ \mathbf{A}$ and $ \mathbf{I}-\mathbf{C}\mathbf{A}^{-1}\mathbf{B}$ are invertible, the inverse matrix is:

$\displaystyle (\mathbf{A}-\mathbf{BC})^{-1}=\mathbf{A}^{-1}+ \mathbf{A}^{-1}\mathbf{B}(\mathbf{I}-\mathbf{C}\mathbf{A}^{-1}\mathbf{B})^{-1} \mathbf{C}\mathbf{A}^{-1}.$ (8)
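The identity in equation 8 is easy to check numerically; the snippet below verifies it for a random invertible $ \mathbf{A}$ and random column and row vectors $ \mathbf{B}$ and $ \mathbf{C}$ (dimensions chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = np.diag(rng.uniform(1.0, 2.0, size=n))   # invertible diagonal A
B = rng.normal(size=(n, 1))                  # column vector
C = rng.normal(size=(1, n))                  # row vector

A_inv = np.linalg.inv(A)
lhs = np.linalg.inv(A - B @ C)               # direct inverse
mid = np.linalg.inv(np.eye(1) - C @ A_inv @ B)
rhs = A_inv + A_inv @ B @ mid @ C @ A_inv    # Sherman-Morrison form
```

The payoff is that the only matrix inverted on the right-hand side beyond $ \mathbf{A}^{-1}$ is the 1-by-1 term $ \mathbf{I}-\mathbf{C}\mathbf{A}^{-1}\mathbf{B}$, a scalar division.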

In this paper, $ \mathbf{A}=\xi^2\mathbf{I}$ , $ \mathbf{B}=-\mathbf{d}(t,x,y)$ , and $ \mathbf{C}=\mathbf{d}(t,x,y)^{T}$ in equation 8. After algebraic simplification, the filter coefficients arrive at the explicit solution:

$\displaystyle \mathbf{a}(t,x,y)=\mathbf{\tilde{a}}(t,x,y)+ \frac{ d(t,x,y)-\mathbf{d}(t,x,y)^{T}\mathbf{\tilde{a}}(t,x,y)} { \xi^2+\mathbf{d}(t,x,y)^T\mathbf{d}(t,x,y)}\mathbf{d}(t,x,y).$ (9)
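A minimal sketch of this rank-one update, checked against a direct solve of equation 6 (vector shapes and names are illustrative):

```python
import numpy as np

def streaming_update(d_vec, d0, a_prior, xi2):
    """Sketch of equation 9: update the filter coefficients with a rank-one
    correction. a_prior is the weighted average of neighboring filters,
    d0 is the centered data sample, and d_vec holds the data translations."""
    residual = d0 - d_vec @ a_prior          # prediction error of the prior
    return a_prior + residual / (xi2 + d_vec @ d_vec) * d_vec
```

Because this follows from equation 8 with $ \mathbf{A}=\xi^2\mathbf{I}$, the update matches the direct inverse of equation 6 while costing only a few vector operations per sample, which is what makes the streaming computation practical.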



2022-04-12