Documentation

Application of principal component analysis in weighted stacking of seismic data

July 18, 2020

A new paper is added to the collection of reproducible documents: Application of principal component analysis in weighted stacking of seismic data


Optimal stacking of multiple datasets plays a significant role in many scientific domains. The quality of stacking affects the signal-to-noise ratio (SNR) and amplitude fidelity of the stacked image. In seismic data processing, similarity-weighted stacking uses the local similarity between each trace and a reference trace as the weight for stacking the flattened prestack seismic data after normal moveout (NMO) correction. The traditional reference trace is an approximate zero-offset trace calculated as a direct arithmetic mean of the data matrix along the spatial direction. However, when the data matrix contains abnormal misaligned traces, erratic noise, or non-Gaussian random noise, the accuracy of the approximate zero-offset trace is greatly degraded, which in turn lowers the quality of the stack. We propose a novel weighted stacking method based on principal component analysis (PCA). The principal components of the data matrix, namely the useful signals, are extracted with a low-rank decomposition method by solving an optimization problem with a low-rank constraint. The optimization problem is solved via a standard singular value decomposition (SVD) algorithm. The low-rank decomposition of the data matrix alleviates the influence of abnormal traces and of erratic and non-Gaussian random noise, and is thus more robust than the traditional alternatives. We use both synthetic and field data examples to demonstrate the performance of the proposed approach.

Velocity analysis of simultaneous-source data

April 11, 2020

A new paper is added to the collection of reproducible documents: Velocity analysis of simultaneous-source data using high-resolution semblance – coping with the strong noise


Direct imaging of simultaneous-source (or blended) data, without the need for deblending, requires a precise subsurface velocity model. In this paper, we focus on velocity analysis of simultaneous-source data using the NMO-based velocity picking approach. We demonstrate that it is possible to obtain a precise velocity model directly from the blended data in the common-midpoint (CMP) domain. The similarity-weighted semblance can provide a much better velocity spectrum, with higher resolution and higher reliability, than the traditional semblance. The similarity-weighted semblance enforces an inherent noise attenuation solely in the semblance calculation stage and is thus insensitive to the intense blending interference. We use both simulated synthetic and field data examples to demonstrate the performance of the similarity-weighted semblance in obtaining a reliable subsurface velocity model for direct migration of simultaneous-source data. The image of the blended field data migrated with prestack Kirchhoff time migration (PSKTM), using the velocity picked from the similarity-weighted semblance, is very close to the migrated image of the unblended data.
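The contrast between the two semblance definitions can be sketched as follows. This is a simplified illustration on an already-flattened CMP window: the weights here come from a global correlation with the stacked trace, standing in for the local-similarity weights used in the paper:

```python
import numpy as np

def semblance(d):
    """Conventional semblance of a flattened CMP window d (nt x nx)."""
    num = (d.sum(axis=1) ** 2).sum()
    den = d.shape[1] * (d ** 2).sum()
    return num / den

def similarity_weighted_semblance(d, eps=1e-10):
    """Weight each trace by its correlation with the stacked (reference)
    trace before the semblance sums, suppressing interference energy."""
    ref = d.mean(axis=1)
    w = np.array([max(0.0, np.dot(d[:, i], ref) /
                      (np.linalg.norm(d[:, i]) * np.linalg.norm(ref) + eps))
                  for i in range(d.shape[1])])
    dw = d * w
    num = (dw.sum(axis=1) ** 2).sum()
    den = d.shape[1] * (dw ** 2).sum()
    return num / den

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 100)
sig = np.exp(-200 * (t - 0.5) ** 2)        # a flattened event
clean = np.tile(sig[:, None], (1, 15))
noisy = clean + 0.5 * rng.standard_normal(clean.shape)  # strong interference
s_conv = semblance(noisy)
s_weight = similarity_weighted_semblance(noisy)
```

On perfectly flattened noise-free data both measures equal one; blending noise lowers the conventional value, while the weights pull the weighted measure back toward the coherent energy.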

Compressive sensing for seismic data reconstruction

April 11, 2020

A new paper is added to the collection of reproducible documents: Compressive sensing for seismic data reconstruction via fast projection onto convex sets based on seislet transform


According to compressive sensing (CS) theory in the signal-processing field, we propose a new CS approach based on a fast projection onto convex sets (FPOCS) algorithm with a sparsity constraint in the seislet transform domain. The seislet transform appears to be the sparsest among the state-of-the-art sparse transforms. FPOCS converges much faster than conventional POCS (about two thirds of the conventional iterations can be saved) while maintaining the same recovery performance. FPOCS is faster and performs better than FISTA for relatively clean data, but is slower and performs worse than FISTA for noisier data, which provides a reference for deciding which algorithm to use in practice according to the noise level of the seismic data. The seislet-transform-based CS approach achieves clearly better data recovery than $f$-$k$-transform-based scenarios, in terms of signal-to-noise ratio (SNR), local similarity comparison, and visual observation, because of the much sparser structure in the seislet transform domain. We use both synthetic and field data examples to demonstrate the superior performance of the proposed seislet-based FPOCS approach.
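A minimal sketch of the POCS/FPOCS iteration is shown below. Two deliberate simplifications: the 2D FFT stands in for the seislet transform, and a percentile-based relaxing threshold schedule is assumed; the acceleration step is the standard FISTA-style momentum that distinguishes a fast POCS variant from the conventional one:

```python
import numpy as np

def pocs_interpolate(obs, mask, niter=50, fast=True):
    """POCS reconstruction of decimated data with hard thresholding in
    the Fourier domain (a stand-in for the seislet domain).
    fast=True adds FISTA-style momentum (the FPOCS acceleration)."""
    x = obs.copy()
    x_prev = obs.copy()
    t_prev = 1.0
    for k in range(niter):
        if fast:
            t = 0.5 * (1 + np.sqrt(1 + 4 * t_prev ** 2))
            y = x + (t_prev - 1) / t * (x - x_prev)   # momentum step
            t_prev = t
        else:
            y = x
        coef = np.fft.fft2(y)
        # relaxing threshold: keep few coefficients first, more later
        thr = np.percentile(np.abs(coef), 100 - 100 * (k + 1) / niter)
        coef[np.abs(coef) < thr] = 0
        x_prev = x
        # projection: reinsert the observed traces
        x = obs * mask + np.fft.ifft2(coef).real * (1 - mask)
    return x

# toy example: a dipping plane wave with ~40% of traces removed
nt, nx = 64, 32
t = np.arange(nt)[:, None]
d = np.sin(2 * np.pi * (t + 2 * np.arange(nx)) / 16.0)
rng = np.random.default_rng(2)
mask = (rng.random(nx) > 0.4).astype(float)[None, :]
obs = d * mask
rec = pocs_interpolate(obs, mask, niter=100)
```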

CUDA package for Q-compensated RTM

April 3, 2020

A new paper is added to the collection of reproducible documents: CuQ-RTM: A CUDA-based code package for stable and efficient Q-compensated reverse time migration

Reverse time migration (RTM) in attenuating media should take the absorption and dispersion effects into consideration. The recently proposed viscoacoustic wave equation with decoupled fractional Laplacians (DFLs) facilitates separate amplitude compensation and phase correction in $Q$-compensated RTM ($Q$-RTM). However, the intensive computation and enormous storage requirements of $Q$-RTM prevent it from being extended to practical applications, especially for large-scale 2D or 3D cases. The emerging graphics processing unit (GPU) computing technology, built around a scalable array of multithreaded Streaming Multiprocessors (SMs), presents an opportunity to greatly accelerate $Q$-RTM by appropriately exploiting the GPU's architectural characteristics. We present cu$Q$-RTM, a CUDA-based code package that implements $Q$-RTM based on a set of stable and efficient strategies, such as streamed CUFFT, checkpointing-assisted time-reversal reconstruction (CATRC), and adaptive stabilization. The cu$Q$-RTM package can run in a multi-level parallelism (MLP) fashion, either synchronously or asynchronously, to take advantage of all the CPUs and GPUs available, while maintaining impressively good stability and flexibility. We mainly outline the architecture of the cu$Q$-RTM code package and some program optimization schemes. The speedup ratio on a single GeForce GTX760 GPU card relative to a single core of an Intel Core i5-4460 CPU can exceed 80 in large-scale simulation. The strong scaling property of multi-GPU parallelism is demonstrated by performing $Q$-RTM on a Marmousi model with one to six GPU(s) involved. Finally, we further verify the feasibility and efficiency of cu$Q$-RTM on a field data set. The “living” package is available from GitHub at https://github.com/Geophysics-OpenSource/cuQRTM, and peer-reviewed code related to this article can be found at http://software.seg.org/2019/0001.
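The checkpointing idea behind CATRC can be sketched in a language-agnostic way: store only every c-th forward wavefield, then recompute intermediate states from the nearest checkpoint when the backward pass needs them, trading recomputation for storage. The toy one-step "propagator" below is a hypothetical stand-in for the package's GPU finite-difference update:

```python
import numpy as np

def step(u, k=0.99):
    """Toy one-step 'propagator' standing in for a viscoacoustic
    finite-difference update of the source wavefield."""
    return k * np.roll(u, 1)

def forward_with_checkpoints(u0, nt, c):
    """Run nt forward steps, storing only every c-th wavefield."""
    ckpts = {0: u0.copy()}
    u = u0.copy()
    for it in range(1, nt + 1):
        u = step(u)
        if it % c == 0:
            ckpts[it] = u.copy()
    return ckpts

def reconstruct(ckpts, it, c):
    """Recompute the wavefield at time step it from the nearest earlier
    checkpoint, as in checkpointing-assisted time-reversal reconstruction."""
    base = (it // c) * c
    u = ckpts[base].copy()
    for _ in range(it - base):
        u = step(u)
    return u

u0 = np.zeros(16)
u0[0] = 1.0
nt, c = 20, 5
ckpts = forward_with_checkpoints(u0, nt, c)
u_full = u0.copy()
for _ in range(13):                 # full recomputation, for comparison
    u_full = step(u_full)
u_rec = reconstruct(ckpts, 13, c)   # checkpoint 10, then 3 extra steps
```

Only nt/c + 1 snapshots are stored instead of nt, at the cost of at most c - 1 extra propagation steps per reconstructed state.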

Fast dictionary learning

April 3, 2020

A new paper is added to the collection of reproducible documents: Fast dictionary learning for noise attenuation of multidimensional seismic data

The K-SVD algorithm has been successfully utilized for adaptively learning a sparse dictionary in 2D seismic denoising. Because of the high computational cost of the many SVDs in the K-SVD algorithm, it is impractical in many situations, especially for 3D or 5D problems. In this paper, I extend the dictionary-learning-based denoising approach from 2D to 3D. To address the computational efficiency problem of K-SVD, I propose a fast dictionary learning approach based on the sequential generalized K-means (SGK) algorithm for denoising multidimensional seismic data. The SGK algorithm updates each dictionary atom by taking an arithmetic average of several training signals instead of calculating an SVD, as is done in the K-SVD algorithm. I summarize the sparse dictionary learning algorithm using K-SVD and introduce the SGK algorithm together with its detailed mathematical implications. 3D synthetic, 2D field, and 3D field data examples are used to demonstrate the performance of both the K-SVD and SGK algorithms. The results show that the SGK algorithm significantly increases computational efficiency while only slightly degrading the denoising performance.
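The difference between the two atom-update rules can be isolated in a few lines. This is a schematic comparison on a single atom, not the full dictionary-learning loop (sparse coding, atom sweeping, and the K-SVD residual restriction are omitted):

```python
import numpy as np

def ksvd_atom_update(E_k):
    """K-SVD updates an atom as the leading left singular vector of the
    restricted residual matrix E_k: one SVD per atom, the costly part."""
    u, s, vt = np.linalg.svd(E_k, full_matrices=False)
    return u[:, 0]

def sgk_atom_update(X_k):
    """SGK replaces the SVD with an arithmetic average of the training
    signals X_k currently assigned to this atom: much cheaper."""
    a = X_k.mean(axis=1)
    return a / (np.linalg.norm(a) + 1e-12)

rng = np.random.default_rng(3)
atom_true = rng.standard_normal(32)
atom_true /= np.linalg.norm(atom_true)
# training signals: positive scalings of the atom plus small noise
X = atom_true[:, None] * rng.uniform(0.5, 2.0, 50)[None, :] \
    + 0.05 * rng.standard_normal((32, 50))
a_svd = ksvd_atom_update(X)
a_sgk = sgk_atom_update(X)
```

When the signals assigned to an atom are roughly scaled copies of it, the average recovers nearly the same direction as the leading singular vector, which is why the denoising quality degrades only slightly.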

Plane-wave orthogonal polynomial transform

March 27, 2020

A new paper is added to the collection of reproducible documents: Plane-wave orthogonal polynomial transform for amplitude-preserving noise attenuation

Amplitude-preserving data processing is an important and challenging topic in many scientific fields. The amplitude-variation details in seismic data are especially important because the amplitude variation is directly related to the subsurface wave impedance and fluid characteristics. We propose a novel seismic noise attenuation approach based on the local plane-wave assumption for seismic events and the amplitude-preserving capability of the orthogonal polynomial transform (OPT). The OPT represents spatially correlative seismic data as a superposition of polynomial basis functions, so that the random noise is distinguished from the useful energy through the high-order orthogonal polynomial coefficients. The seismic energy is most correlative along the structural direction, and thus the OPT is best performed on a flattened gather. We introduce in detail the flattening operator that creates the flattened dimension, in which the OPT can subsequently be applied. The flattening operator is constructed by deriving a plane-wave trace continuation relation from the plane-wave equation. We demonstrate that both the plane-wave trace continuation and the OPT preserve the strong amplitude variation present in seismic data. To obtain robust slope estimation in the presence of noise, a robust slope estimation approach is introduced to replace the traditional method. A group of synthetic, prestack, and poststack field seismic data examples are used to demonstrate the potential of the proposed framework in realistic applications.
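A simplified sketch of the OPT idea, assuming the gather has already been flattened: each time slice is expanded in low-order Legendre (orthogonal) polynomials along the spatial axis, the smooth low-order part keeps the lateral amplitude variation, and the residual is treated as random noise. The choice of Legendre basis and fitting order here is illustrative:

```python
import numpy as np
from numpy.polynomial import legendre

def opt_denoise(d, order=3):
    """Fit a low-order Legendre expansion along the flattened spatial
    axis of each time slice of d (nt x nx); keep the smooth part."""
    nx = d.shape[1]
    x = np.linspace(-1, 1, nx)
    out = np.empty_like(d)
    for it in range(d.shape[0]):
        c = legendre.legfit(x, d[it], order)   # orthogonal-polynomial coefs
        out[it] = legendre.legval(x, c)        # low-order reconstruction
    return out

rng = np.random.default_rng(4)
nt, nx = 80, 30
t = np.linspace(0, 1, nt)[:, None]
amp = np.linspace(1.0, 2.0, nx)[None, :]        # lateral amplitude variation
clean = amp * np.exp(-300 * (t - 0.5) ** 2)     # one flattened event
noisy = clean + 0.3 * rng.standard_normal((nt, nx))
den = opt_denoise(noisy, order=3)
```

Because the lateral amplitude trend is itself low order, it passes through the fit untouched, which is the amplitude-preserving property the abstract emphasizes.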

Time-frequency decomposition

March 18, 2020

A new paper is added to the collection of reproducible documents: Probing the subsurface karst features using time-frequency decomposition

High-resolution mapping of karst features is of great importance to hydrocarbon discovery and recovery in resource exploration. Currently, however, there are few effective methods specifically tailored to this task. 3D seismic data can reveal the existence of karsts to some extent but cannot provide a precise characterization. I propose an effective framework for accurately probing subsurface karst features using a well-developed time-frequency decomposition algorithm. More specifically, I introduce a frequency-interval analysis approach for obtaining the best karst detection result from an optimal frequency interval. A high-resolution time-frequency transform is preferred in the proposed framework to capture the inherent frequency components hidden behind the amplitude map. Although a single frequency slice cannot provide a reliable karst depiction, the summation over the selected frequency interval yields a high-resolution and high-fidelity delineation of subsurface karsts. I use a publicly available 3D field seismic dataset as an example to show the performance of the proposed method.
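The frequency-interval summation can be sketched with a plain short-time Fourier transform standing in for the high-resolution transform preferred in the paper; the window, hop, and band limits below are illustrative choices, not the paper's parameters:

```python
import numpy as np

def stft_amplitude(trace, win=32, hop=8):
    """Naive short-time Fourier transform amplitude (nframes x nfreq),
    a stand-in for a high-resolution time-frequency transform."""
    w = np.hanning(win)
    frames = [trace[i:i + win] * w
              for i in range(0, len(trace) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def band_energy_map(data, f_lo, f_hi, dt=0.004, win=32, hop=8):
    """Sum time-frequency amplitudes over a chosen frequency interval
    for every trace: the frequency-interval analysis step."""
    freqs = np.fft.rfftfreq(win, d=dt)
    sel = (freqs >= f_lo) & (freqs <= f_hi)
    return np.array([stft_amplitude(tr, win, hop)[:, sel].sum(axis=1)
                     for tr in data])

rng = np.random.default_rng(5)
nx, nt = 10, 256
data = 0.1 * rng.standard_normal((nx, nt))
t = np.arange(nt) * 0.004
# a localized 40 Hz anomaly (karst-like response) on trace 4 around 0.5 s
data[4] += np.sin(2 * np.pi * 40 * t) * np.exp(-((t - 0.5) / 0.03) ** 2)
emap = band_energy_map(data, 30.0, 50.0)
```

Summing over the 30-50 Hz interval concentrates the anomaly's energy on the correct trace, whereas any single frequency slice would be noisier.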

Spectral decomposition using regularized non-stationary autoregression

March 11, 2020

A new paper is added to the collection of reproducible documents: Application of spectral decomposition using regularized non-stationary autoregression to random noise attenuation

We propose an application of spectral decomposition using regularized non-stationary autoregression (SDRNAR) to random noise attenuation. SDRNAR is a recently proposed signal-analysis method that decomposes a seismic signal into several spectral components, each with a smoothly variable frequency and smoothly variable amplitude. In the proposed denoising approach, random noise is taken to be the residual of the decomposed spectral components because it is unpredictable. One unique property of this approach is that the amplitude maps of the different frequency components are also obtained during the denoising process, which can be valuable for interpretation tasks. Compared with spectral decomposition by empirical mode decomposition (EMD), SDRNAR offers higher efficiency and better decomposition performance. Compared with $f$-$x$ deconvolution and mean filtering, the proposed denoising approach obtains a higher signal-to-noise ratio (SNR) and preserves more useful energy. The main limitation of the approach is that it can only be applied to seismic profiles with relatively flat events. However, because it is applied trace by trace, it preserves spatial discontinuities. We use both synthetic and field data examples to demonstrate the performance of the proposed method.
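The denoising principle (signal = sum of narrow spectral components, noise = the unpredictable residual) can be illustrated schematically. SDRNAR itself estimates smoothly varying frequencies and amplitudes by regularized inversion; the FFT band masking below is only a crude stand-in for that decomposition, with hypothetical center frequencies and bandwidth:

```python
import numpy as np

def spectral_components(trace, centers, half_bw, dt=0.004):
    """Split a trace into narrow-band components around given center
    frequencies via FFT masking: a schematic stand-in for SDRNAR's
    regularized non-stationary autoregression."""
    n = len(trace)
    freqs = np.fft.rfftfreq(n, d=dt)
    spec = np.fft.rfft(trace)
    comps = []
    for fc in centers:
        mask = np.abs(freqs - fc) <= half_bw
        comps.append(np.fft.irfft(spec * mask, n))
    return np.array(comps)

rng = np.random.default_rng(6)
n = 500
t = np.arange(n) * 0.004
clean = np.sin(2 * np.pi * 15 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
noisy = clean + 0.3 * rng.standard_normal(n)
comps = spectral_components(noisy, centers=[15.0, 40.0], half_bw=5.0)
denoised = comps.sum(axis=0)
residual = noisy - denoised        # treated as the random noise
```

Each row of `comps` is one spectral component whose amplitude envelope could be mapped per frequency, mirroring the by-product the abstract mentions.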

Rank-reduction 3D seismic data denoising and reconstruction

March 11, 2020

A new paper is added to the collection of reproducible documents: An open-source Matlab code package for improved rank-reduction 3D seismic data denoising and reconstruction

Simultaneous seismic data denoising and reconstruction is currently a popular research subject in modern reflection seismology. The traditional rank-reduction-based 3D seismic data denoising and reconstruction algorithm leaves strong residual noise in the reconstructed data and thus affects subsequent processing and interpretation tasks. In this paper, we propose an improved rank-reduction method by modifying the truncated singular value decomposition (TSVD) formula used in the traditional method. The proposed approach can achieve nearly perfect reconstruction even in the case of a low signal-to-noise ratio (SNR). The proposed algorithm is tested on both synthetic and field data examples. Considering that seismic data interpolation and denoising source packages are seldom in the public domain, we also provide a program template for the rank-reduction-based simultaneous denoising and reconstruction algorithm in the form of an open-source Matlab package.
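A 1D sketch of Hankel-matrix rank reduction is given below (the open-source package is in Matlab; NumPy is used here for a self-contained illustration). The damping-style modification of the kept singular values is one known way to modify the TSVD formula and is included only as an example, not as the paper's exact expression:

```python
import numpy as np

def hankelize(trace, L):
    """Build an L x (n-L+1) Hankel matrix from a 1D signal."""
    n = len(trace)
    return np.array([trace[i:i + n - L + 1] for i in range(L)])

def tsvd(M, rank, damp=None):
    """Truncated SVD of M; if damp is given, additionally shrink the kept
    singular values (an illustrative modification of plain TSVD)."""
    u, s, vt = np.linalg.svd(M, full_matrices=False)
    s_k = s[:rank].copy()
    if damp is not None:
        # suppress the noise that leaks into the leading components
        s_k = s_k * np.maximum(0.0, 1.0 - (s[rank] / s_k) ** damp)
    return (u[:, :rank] * s_k) @ vt[:rank, :]

def rr_denoise(trace, L, rank, damp=None):
    """Hankelize, rank-reduce, then recover the trace by averaging
    along the Hankel anti-diagonals."""
    M = tsvd(hankelize(trace, L), rank, damp)
    n = len(trace)
    out = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(M.shape[0]):
        for j in range(M.shape[1]):
            out[i + j] += M[i, j]
            cnt[i + j] += 1
    return out / cnt

rng = np.random.default_rng(7)
n = 200
clean = np.sin(2 * np.pi * np.arange(n) / 25.0)   # one harmonic: rank-2 Hankel
noisy = clean + 0.5 * rng.standard_normal(n)
den_plain = rr_denoise(noisy, 50, 2)
den_damp = rr_denoise(noisy, 50, 2, damp=4)
```

For multichannel seismic data the same machinery is applied to block-Hankel matrices built in the frequency-space domain, with the missing-trace reinsertion handling reconstruction.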

Structure-oriented singular value decomposition

March 11, 2020

A new paper is added to the collection of reproducible documents: Structure-oriented singular value decomposition for random noise attenuation of seismic data

Singular value decomposition (SVD) can be used both globally and locally to remove random noise and improve the signal-to-noise ratio (SNR) of seismic data. However, these approaches can only be applied to seismic data with simple structure, where there is only one dip component in each processing window. We introduce a novel denoising approach that utilizes a structure-oriented SVD and can enhance seismic reflections with continuously variable slopes. We create a third dimension for a 2D seismic profile by using the plane-wave prediction operator to predict each trace from its neighbouring traces and apply the SVD along this dimension. Constructing the added dimension is equivalent to flattening the seismic reflections within a neighbouring window. The third dimension is then averaged to return to the original dimensionality. We use two synthetic examples of different complexity and one field data example to demonstrate the performance of the proposed structure-oriented SVD. Compared with global and local SVD and $f$-$x$ deconvolution, the structure-oriented SVD obtains much clearer reflections and preserves more useful energy.
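The construction can be sketched as follows. An integer-shift `np.roll` stands in for the plane-wave prediction operator (which handles fractional, spatially varying slopes), and a known constant slope is assumed for the toy data:

```python
import numpy as np

def predict(trace, shift):
    """Shift a neighbouring trace along the local slope: a simple
    stand-in for the plane-wave prediction operator."""
    return np.roll(trace, shift)

def structure_oriented_svd(d, slope, half=3):
    """For each trace, predict it from up to 2*half neighbours along the
    slope (the added third dimension), keep the rank-1 SVD part of that
    flattened block, and average it back to one trace."""
    nt, nx = d.shape
    out = np.zeros_like(d)
    for i in range(nx):
        js = [j for j in range(i - half, i + half + 1) if 0 <= j < nx]
        block = np.stack([predict(d[:, j], slope * (i - j)) for j in js],
                         axis=1)
        u, s, vt = np.linalg.svd(block, full_matrices=False)
        block1 = np.outer(u[:, 0] * s[0], vt[0])  # rank-1 (flat) component
        out[:, i] = block1.mean(axis=1)           # collapse the 3rd dim
    return out

rng = np.random.default_rng(8)
nt, nx, slope = 100, 30, 2            # event dips 2 samples per trace
t = np.arange(nt)
wavelet = np.exp(-0.05 * (t - 30) ** 2)
clean = np.stack([np.roll(wavelet, slope * j) for j in range(nx)], axis=1)
noisy = clean + 0.2 * rng.standard_normal((nt, nx))
den = structure_oriented_svd(noisy, slope)
```

Because every predicted neighbour aligns with the central trace, the block is rank-1 up to noise, so a plain SVD works even though the event dips across the original profile.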