Examples

We use one synthetic example and one field data example to demonstrate the interpolation performance of the proposed approach. The synthetic example is a combination of linear reflectors, curved reflectors, and faults. The original data and the decimated data with 30% of the traces randomly removed are shown in Figures 3a and 3b, respectively. The data reconstructed using $f$-$k$ based POCS and FPOCS and seislet based POCS and FPOCS, together with the corresponding error sections, are shown in Figures 4 and 6, respectively. In this paper, we use the percentile thresholding strategy (Chen et al., 2014a), which preserves a constant percentage of the largest-magnitude coefficients during the iterations. For the first example, 15% of the coefficients are preserved. For better comparison, we zoom into the frame boxes shown in Figures 3 and 4 and display the zoomed sections in Figure 8. To compare the performance numerically, we use the signal-to-noise ratio (SNR): $SNR=10\log_{10}(\Arrowvert\mathbf{d}_{true}\Arrowvert_2^2/\Arrowvert\mathbf{d}_{true}-\mathbf{d}_{inter}\Arrowvert_2^2)$, which is widely used in the literature to measure the data recovery error (Liu et al., 2009a; Yang et al., 2015). The SNR diagram of the first example is plotted in Figure 9. According to both the SNR comparison and visual inspection, the seislet based POCS and FPOCS clearly outperform the $f$-$k$ based POCS and FPOCS. It should be mentioned that the superior performance of the FPOCS algorithm depends on the parameter selection (the percentage of preserved coefficients in the transformed domain) in the percentile thresholding strategy. 
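As a concrete illustration, the percentile thresholding strategy and the SNR measure above can be sketched in a few lines of numpy. This is a minimal sketch with our own (hypothetical) function names, not the implementation used to produce the figures:

```python
import numpy as np

def percentile_threshold(coeffs, keep_percent):
    """Percentile thresholding: keep only the largest keep_percent% of
    transform-domain coefficients by magnitude and zero the rest."""
    mags = np.abs(coeffs)
    tau = np.percentile(mags, 100.0 - keep_percent)
    return np.where(mags >= tau, coeffs, 0.0)

def snr(d_true, d_inter):
    """SNR = 10*log10(||d_true||_2^2 / ||d_true - d_inter||_2^2), in dB."""
    return 10.0 * np.log10(np.sum(d_true ** 2) /
                           np.sum((d_true - d_inter) ** 2))
```

Preserving 15% of the coefficients, as in the first example, would then amount to applying `percentile_threshold(c, 15)` to the transform-domain coefficients `c` at every iteration.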
Figure 9 shows the performance of the different algorithms using the best selected parameter (a percentage of 15%), while Figure 10 shows their performance using an inappropriate parameter (a percentage of 20%). In Figure 10, the SNR of both seislet based FPOCS and $f$-$k$ based FPOCS first increases and then decreases. Thus, when using the FPOCS algorithm in practice, several parameter-tuning runs may be needed to select the best parameter. The reconstructed results using FPOCS and POCS are nearly the same. From the convergence diagram, we can confirm that FPOCS and POCS converge to the same SNR, but FPOCS converges much faster. Figure 11a shows the amplitude comparison for the 175th trace (highlighted in Figures 3a, 4b, and 4d). It is also obvious that the seislet based FPOCS approach obtains better performance than the $f$-$k$ based FPOCS approach. In addition to the SNR comparison, we also use another recently developed measure of recovery performance: the local similarity. The local similarity was initially used to evaluate signal reconstruction in the noise attenuation problem (Chen and Fomel, 2015); here, we borrow the same tool to numerically measure the data reconstruction performance. Appendix A gives a short review of the local similarity and its calculation. We use the local similarity in two ways: 1) to calculate the local similarity between the true data and the reconstructed data; 2) to calculate the local similarity between the true data and the estimation error. The higher the former similarity, the better the reconstruction, since this similarity measures the correctness; the higher the latter similarity, the worse the reconstruction, since this similarity magnifies the error. Figure 5 shows the local similarity between the true dataset and the reconstructed data. 
The two similarity maps on the bottom row clearly contain fewer anomalies than those on the top row, indicating that the two $f$-$k$ based approaches cause more error than the seislet based approaches. Please note that the maximum local similarity is 1, which means that the two compared signals are exactly the same. Figure 7 shows the local similarity between the true dataset and the reconstruction error. The two similarity maps on the bottom row show small values, while the two maps on the top row show high-value anomalies, further confirming the superior performance of the seislet based approaches. Figure 11b shows the local similarity comparison (between the reconstructed trace and the true trace), which again confirms that the reconstructed trace using the seislet based approach is much more similar to the true data.
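To give a feel for what the local similarity measures, the sketch below uses a windowed, normalized cross-correlation as a rough stand-in. The actual measure (reviewed in Appendix A) is computed through smooth regularized division rather than fixed windows, so this simplification only illustrates the key property: the value is bounded by 1 and equals 1 where the two signals coincide.

```python
import numpy as np

def local_similarity(a, b, win=51):
    """Simplified local similarity between two traces: normalized
    cross-correlation inside a sliding window around each sample.
    (A rough stand-in for the regularized measure of Appendix A.)"""
    n = len(a)
    half = win // 2
    sim = np.zeros(n)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        aw, bw = a[lo:hi], b[lo:hi]
        denom = np.linalg.norm(aw) * np.linalg.norm(bw)
        sim[i] = np.dot(aw, bw) / denom if denom > 0 else 0.0
    return sim
```

By the Cauchy-Schwarz inequality the output never exceeds 1, and `local_similarity(s, s)` is identically 1, which is why a reconstructed trace whose similarity to the true trace stays near 1 indicates an accurate reconstruction.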

To compare the difference between IST and POCS (or between FISTA and FPOCS), we conduct two experiments, with clean and noisy irregularly sampled datasets, respectively. For the clean data case, we use the same synthetic example, as shown in Figures 3a and 12a. We reconstruct the missing data with seislet POCS, seislet FPOCS, seislet IST, and seislet FISTA, and show the convergence diagrams in terms of SNR in Figure 12b. It is obvious that both seislet POCS and seislet FPOCS obtain better converged results than seislet IST and seislet FISTA, which results from the fact that in POCS based methods the sampled clean data components help constrain the model during the inversion. For the noisy data case (Figure 12c), however, the convergence diagrams in Figure 12d show the opposite behavior: both seislet IST and seislet FISTA obtain better converged results than seislet POCS and seislet FPOCS. This phenomenon results from the fact that, during the inversion, the IST based methods remove the random noise iteratively, while the POCS based approaches preserve the random noise in the known data components. The conclusion from these two experiments can guide the choice between POCS and IST in practice according to the noise level in the seismic data: for relatively noisy datasets, the IST (FISTA) method can be superior because the extra random noise is gradually attenuated during the thresholding process; for relatively clean datasets, the POCS (FPOCS) method can be superior because the known sampled data help constrain the spatial coherency during the inversion. Currently, we do not have a way to quantify the noise level acceptable for FPOCS; it might be an interesting topic for future investigation. The next example is a field dataset from a marine survey with high data quality, thus we keep using the POCS based approaches for comparison.
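The structural difference between the two families can be sketched as follows, using 2D Fourier-domain percentile thresholding as a hypothetical stand-in for the $f$-$k$ or seislet transform (function names and parameters are ours). POCS thresholds first and then re-inserts the observed traces, so the sampled values, including any noise they carry, are honored exactly; IST mixes the observations in first and then thresholds, so noise on the sampled traces is also attenuated.

```python
import numpy as np

def hard_threshold_fk(d, keep_percent):
    """Zero all 2D Fourier coefficients below the percentile threshold."""
    coeffs = np.fft.fft2(d)
    mags = np.abs(coeffs)
    coeffs[mags < np.percentile(mags, 100.0 - keep_percent)] = 0.0
    return np.real(np.fft.ifft2(coeffs))

def pocs_interp(d_obs, mask, keep_percent=15, niter=50):
    """POCS sketch: threshold, then project back onto the observed data
    (mask is 1 on sampled traces, 0 on missing ones)."""
    d = d_obs.copy()
    for _ in range(niter):
        d = hard_threshold_fk(d, keep_percent)
        d = mask * d_obs + (1.0 - mask) * d   # re-insert observed traces
    return d

def ist_interp(d_obs, mask, keep_percent=15, niter=50):
    """IST sketch (unit step size, binary sampling mask): insert the
    observed traces, then threshold, so the output is also denoised."""
    d = d_obs.copy()
    for _ in range(niter):
        d = mask * d_obs + (1.0 - mask) * d   # data-fitting step
        d = hard_threshold_fk(d, keep_percent)
    return d
```

With a unit step size and a binary sampling mask, the two iterations differ only in the order of the two operations, which is exactly why POCS preserves the sampled noise while IST attenuates it.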

The second example is a field data example, shown in Figure 13a. The incomplete data with 30% of the traces randomly removed are shown in Figure 13b. The reconstructed results for the field data example are shown in Figure 14. For this example, 18% of the coefficients are preserved during the iterations. The error sections using the different approaches are shown in Figure 16. It is obvious that the data reconstructed using the seislet based approaches are much more coherent than the $f$-$k$ based results and have less reconstruction error. We also show zoomed sections in Figure 18 for better comparison. The SNR diagrams are shown in Figure 19 and lead to a conclusion consistent with that from the previous synthetic example: the seislet based approaches obtain better reconstruction performance, and FPOCS converges much faster. Because the $f$-$k$ transform is a global transform, the $f$-$k$ based approaches cause artifacts outside the main data structure, as can be seen at the top of Figure 14b. Since the seislet transform is a local transform, it does not cause such artifacts. Figure 20a shows the amplitude comparison for the 48th trace (highlighted in Figures 13a, 14b, and 14d). It is obvious that the seislet based FPOCS has less reconstruction error than the $f$-$k$ based FPOCS. Figure 15 shows the local similarity between the true field data and the reconstructed datasets. The two similarity maps on the bottom row contain fewer low-similarity anomalies than those on the top row, indicating that the seislet based approaches obtain more accurate reconstructed results. The low-value anomalies in the deep-water part are caused by the low-amplitude (nearly zero) deep-water reflections and should not be taken into consideration when evaluating the performance. Figure 17 shows the local similarity between the true dataset and the reconstruction error. 
The two similarity maps on the bottom row show small values, while the two maps on the top row show high-value anomalies, further confirming the superior performance of the seislet based approaches. Figure 20b shows the local similarity comparison for the field data example (between the reconstructed trace and the true trace), which further confirms the aforementioned conclusions.

Figure 3. (a) Synthetic data. (b) Decimated synthetic data with 30% removed traces.

Figure 4. (a)-(d) Reconstructed sections corresponding to POCS with $f$-$k$ thresholding, FPOCS with $f$-$k$ thresholding, POCS with seislet thresholding, and FPOCS with seislet thresholding.

Figure 5. Local similarity between the reconstructed sections and the true data using different approaches. (a) $f$-$k$ POCS. (b) $f$-$k$ FPOCS. (c) Seislet POCS. (d) Seislet FPOCS.

Figure 6. (a)-(d) Reconstruction error sections corresponding to Figures 4a-4d, respectively.

Figure 7. Local similarity between the error sections and the true data using different approaches. (a) $f$-$k$ POCS. (b) $f$-$k$ FPOCS. (c) Seislet POCS. (d) Seislet FPOCS.

Figure 8. Zoomed sections. (a) True data. (b) Decimated data. (c) POCS with $f$-$k$ thresholding. (d) FPOCS with $f$-$k$ thresholding. (e) POCS with seislet thresholding. (f) FPOCS with seislet thresholding.

Figure 9. SNR comparison of the synthetic example, when the best parameter is selected.

Figure 10. SNR comparison of the synthetic example, when an inappropriate parameter is selected.

Figure 11. (a) Amplitude comparison. (b) Local similarity comparison. The black solid line denotes the true trace, the red dot-dashed line denotes FPOCS with seislet thresholding, and the green dashed line denotes FPOCS with $f$-$k$ thresholding. Note that the amplitude using seislet thresholding is much closer to the true amplitude, and the local similarity using seislet thresholding is much closer to the maximum similarity of 1.

Figure 12. (a) Clean synthetic data. (b) Comparison of POCS (FPOCS) and IST (FISTA) on the clean data in terms of SNR. (c) Noisy synthetic data. (d) Comparison of POCS (FPOCS) and IST (FISTA) on the noisy data in terms of SNR.

Figure 13. Field data example. (a) True field data. (b) Decimated field data with 30% removed traces.

Figure 14. Field data example. (a)-(d) Reconstructed sections corresponding to POCS with $f$-$k$ thresholding, FPOCS with $f$-$k$ thresholding, POCS with seislet thresholding, and FPOCS with seislet thresholding. Note the artifacts caused by the $f$-$k$ based methods, as pointed out by the arrows.

Figure 15. Field data example. Local similarity between the reconstructed sections and the true data using different approaches. (a) $f$-$k$ POCS. (b) $f$-$k$ FPOCS. (c) Seislet POCS. (d) Seislet FPOCS.

Figure 16. Field data example. (a)-(d) Reconstruction error sections corresponding to Figures 14a-14d, respectively.

Figure 17. Field data example. Local similarity between the error sections and the true data using different approaches. (a) $f$-$k$ POCS. (b) $f$-$k$ FPOCS. (c) Seislet POCS. (d) Seislet FPOCS.

Figure 18. Zoomed sections of the field data example. (a) True data. (b) Decimated data. (c) POCS with $f$-$k$ thresholding. (d) FPOCS with $f$-$k$ thresholding. (e) POCS with seislet thresholding. (f) FPOCS with seislet thresholding.

Figure 19. SNR comparison of the field data example.

Figure 20. Field data example. (a) Amplitude comparison. (b) Local similarity comparison. The black solid line denotes the true trace, the red dot-dashed line denotes FPOCS with seislet thresholding, and the green dashed line denotes FPOCS with $f$-$k$ thresholding. Note that the amplitude using seislet thresholding is much closer to the true amplitude, and the local similarity using seislet thresholding is much closer to the maximum similarity of 1.


2020-04-11