Step 2: Data interpolation with adaptive PEF

In the second step, a similar problem is solved, except that the filter is known and the missing traces are unknown. In the decimated-trace interpolation problem, we squeeze the filter in equation 3 back to its original size (by discarding the alternate rows and columns that were zeroed) and then formulate the least-squares problem,

$\displaystyle \widehat{S}(t,x) = \arg\min_{S}\Vert S(t,x)-\sum_{n=1}^{N} \widehat{B_n}(t,x)S_n(t,x)\Vert_2^2\;,$ (14)

subject to

$\displaystyle \widehat{S}(t,x_k) = S_{known}(t,x_k)\;,$ (15)

where $\widehat{S}(t,x)$ represents the interpolated output, and the shift indices $i$ and $j$ revert to the original sampling interval; i.e., the shift interval equals 1.
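
To make the prediction inside the norm of equation 14 concrete, the following sketch (not from the original paper; the names `shift2d` and `pef_residual`, the array layout, and the lag convention are assumptions) evaluates the residual for a squeezed adaptive PEF whose coefficients $\widehat{B_n}(t,x)$ vary with position and whose lags lie on the unit grid:

```python
import numpy as np

def shift2d(S, i, j):
    """Shift a 2D array by (i, j) samples, padding with zeros (no wraparound)."""
    out = np.zeros_like(S)
    nt, nx = S.shape
    ts, te = max(i, 0), nt + min(i, 0)
    xs, xe = max(j, 0), nx + min(j, 0)
    out[ts:te, xs:xe] = S[ts - i:te - i, xs - j:xe - j]
    return out

def pef_residual(S, B, lags):
    """Residual of equation 14: S(t,x) - sum_n B_n(t,x) * S(t - i_n, x - j_n).

    S    : data array, shape (Nt, Nx)
    B    : adaptive filter coefficients, shape (N, Nt, Nx)
    lags : list of N (i, j) lags in samples, on the original (unit) grid
    """
    r = S.copy()
    for Bn, (i, j) in zip(B, lags):
        r -= Bn * shift2d(S, i, j)
    return r
```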

We carry out the minimizations in equations 4, 13, and 14 by the conjugate-gradient method (Hestenes and Stiefel, 1952). The constraint (equation 15) supplies the initial model, and the output is constrained to honor the known traces at every iteration of the conjugate-gradient scheme. The computational cost is proportional to $N_{iter}\times N_{f}\times N_{t}\times N_{x}$, where $N_{iter}$ is the number of iterations, $N_f$ is the filter size, and $N_t \times N_x$ is the data size. In our tests, $N_f$ and $N_{iter}$ were both approximately 100. Increasing the smoothing radius in shaping regularization decreases $N_{iter}$ in the filter-estimation step.
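
As an illustration of how the known-trace constraint can be enforced inside the solver, here is a minimal sketch (reusing `shift2d` and the filter conventions from the previous block; the function names, the mask convention, and the choice of conjugate gradients on the normal equations with a zeroed update on known traces are assumptions, not the paper's implementation):

```python
def interpolate_cg(S0, known_mask, B, lags, niter=100):
    """Sketch of constrained CG interpolation (equations 14-15).

    S0         : data with zeros at the missing traces (initial model)
    known_mask : boolean array, True where traces are known (equation 15)
    B, lags    : adaptive PEF coefficients and lags estimated in step 1
    """
    def forward(S):          # A S : prediction-error residual of equation 14
        r = S.copy()
        for Bn, (i, j) in zip(B, lags):
            r -= Bn * shift2d(S, i, j)
        return r

    def adjoint(r):          # A' r : adjoint of the prediction-error operator
        g = r.copy()
        for Bn, (i, j) in zip(B, lags):
            g -= shift2d(Bn * r, -i, -j)
        return g

    S = S0.copy()
    r = -forward(S)                     # residual of A S = 0
    g = adjoint(r)
    g[known_mask] = 0.0                 # keep known traces fixed (equation 15)
    d = g.copy()
    for _ in range(niter):
        Ad = forward(d)
        alpha = np.dot(g.ravel(), g.ravel()) / (np.dot(Ad.ravel(), Ad.ravel()) + 1e-30)
        S += alpha * d
        r -= alpha * Ad
        gnew = adjoint(r)
        gnew[known_mask] = 0.0          # re-impose the constraint every iteration
        beta = np.dot(gnew.ravel(), gnew.ravel()) / (np.dot(g.ravel(), g.ravel()) + 1e-30)
        d = gnew + beta * d
        g = gnew
    return S
```

Zeroing the gradient on the known traces is one simple way to keep the output tied to $S_{known}(t,x_k)$ at each iteration; the default of 100 iterations mirrors the $N_{iter}$ used in the tests described above.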

