A new paper is added to the collection of reproducible documents: Data-driven time-frequency analysis of seismic data using non-stationary Prony method
The empirical mode decomposition aims to decompose the input signal into a small number of components, called intrinsic mode functions, with slowly varying amplitudes and frequencies. In spite of its simplicity and usefulness, however, the empirical mode decomposition lacks a solid mathematical foundation. In this paper, we describe a method to extract the intrinsic mode functions of the input signal using the non-stationary Prony method. The proposed method captures the philosophy of the empirical mode decomposition but uses a different method to compute the intrinsic mode functions. Once the intrinsic mode functions are obtained, we compute the spectrum of the input signal using the Hilbert transform. Synthetic and field data examples validate that the proposed method correctly computes the spectrum of the input signal and show that it can be used in seismic data analysis to facilitate interpretation.
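As a rough illustration of the final Hilbert spectral step only (not of the non-stationary Prony decomposition itself, which the paper describes), the sketch below computes the instantaneous amplitude and frequency of a single intrinsic mode function via an FFT-based analytic signal. The function names are hypothetical; this is a minimal sketch, assuming each IMF is passed in as a 1-D array.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal x + i*H{x}, where H is the Hilbert
    transform (the same construction scipy.signal.hilbert uses,
    reimplemented here with plain NumPy)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def instantaneous_attributes(imf, dt):
    """Instantaneous amplitude (envelope) and frequency (Hz) of one IMF."""
    z = analytic_signal(imf)
    amplitude = np.abs(z)
    phase = np.unwrap(np.angle(z))
    # frequency = (1/2*pi) * d(phase)/dt, via a central difference
    frequency = np.gradient(phase, dt) / (2.0 * np.pi)
    return amplitude, frequency

# Sanity check: a pure 30 Hz cosine should map to ~30 Hz, envelope ~1
dt = 0.001
t = np.arange(0.0, 1.0, dt)
imf = np.cos(2.0 * np.pi * 30.0 * t)
amp, freq = instantaneous_attributes(imf, dt)
```

Plotting `freq` weighted by `amp` against time for every IMF gives the time-frequency spectrum the abstract refers to.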
A new paper is added to the collection of reproducible documents: Enhancing seismic reflections using empirical mode decomposition in the flattened domain
For various reasons, seismic reflections may lack continuity even when no faults or other discontinuities exist. We propose a novel approach for enhancing the amplitude of seismic reflections and making them continuous. We use a plane-wave flattening technique to produce horizontal events for subsequent empirical mode decomposition (EMD) based smoothing in the flattened domain; inverse plane-wave flattening then restores the original curved events. The plane-wave flattening process requires precise local slope estimation, which is provided by the plane-wave destruction (PWD) algorithm. The EMD-based smoothing filter is non-parametric and adaptive, and thus can be conveniently applied. Both prestack and poststack field data examples show a significant improvement in data quality, which makes subsequent interpretation easier and more reliable.
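A common form of EMD-based smoothing treats the first (most oscillatory) intrinsic mode function of each flattened trace as noise and subtracts it. The sketch below illustrates that idea under a simplifying assumption: the extrema envelopes in the sifting loop are built with linear interpolation rather than the cubic splines of standard EMD, and all names are illustrative, not the paper's implementation.

```python
import numpy as np

def local_extrema(x):
    """Indices of interior local maxima and minima of a 1-D array."""
    dx = np.diff(x)
    after = np.hstack([dx, 0.0])    # slope leaving each sample
    before = np.hstack([0.0, dx])   # slope entering each sample
    maxima = np.where((before > 0) & (after < 0))[0]
    minima = np.where((before < 0) & (after > 0))[0]
    return maxima, minima

def first_imf(x, n_sift=10):
    """Extract the most oscillatory component by sifting:
    repeatedly subtract the mean of the upper and lower envelopes."""
    h = x.copy()
    idx = np.arange(len(x))
    for _ in range(n_sift):
        mx, mn = local_extrema(h)
        if len(mx) < 2 or len(mn) < 2:
            break
        upper = np.interp(idx, mx, h[mx])   # linear envelope (simplified)
        lower = np.interp(idx, mn, h[mn])
        h = h - 0.5 * (upper + lower)
    return h

def emd_smooth_trace(x):
    """EMD-based smoothing: remove the first IMF, keep the trend."""
    return x - first_imf(x)

# Example: a slow 2 Hz trend contaminated by a 60 Hz ripple
t = np.linspace(0.0, 1.0, 500)
slow = np.sin(2.0 * np.pi * 2.0 * t)
trace = slow + 0.2 * np.sin(2.0 * np.pi * 60.0 * t)
smoothed = emd_smooth_trace(trace)
```

In the flattened domain the events are horizontal, so applying this smoothing along each trace (or along each flattened horizon slice) suppresses the oscillatory noise without distorting the underlying reflections.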
A new paper is added to the collection of reproducible documents: Application of principal component analysis in weighted stacking of seismic data
Optimal stacking of multiple datasets plays a significant role in many scientific domains. The quality of stacking affects the signal-to-noise ratio (SNR) and amplitude fidelity of the stacked image. In seismic data processing, similarity-weighted stacking uses the local similarity between each trace and a reference trace as the weight when stacking flattened prestack seismic data after normal moveout (NMO) correction. The traditional reference trace is an approximate zero-offset trace calculated as a direct arithmetic mean of the data matrix along the spatial direction. However, when the data matrix contains abnormal mis-aligned traces or erratic, non-Gaussian random noise, the accuracy of the approximate zero-offset trace is greatly degraded, which in turn degrades the quality of stacking. We propose a novel weighted stacking method based on principal component analysis (PCA). The principal components of the data matrix, namely the useful signals, are extracted with a low-rank decomposition method by solving an optimization problem with a low-rank constraint. The optimization problem is solved via a standard singular value decomposition (SVD) algorithm. The low-rank decomposition of the data matrix alleviates the influence of abnormal traces and erratic, non-Gaussian random noise, and thus is more robust than the traditional alternatives. We use both synthetic and field data examples to demonstrate the successful performance of the proposed approach.
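The core idea can be sketched in a few lines: a truncated SVD of the NMO-corrected gather keeps only the leading principal components, and the reference trace is taken from that low-rank approximation instead of a raw mean. This is a minimal sketch assuming a rank-1 signal model and using a global correlation coefficient as a crude stand-in for the paper's local similarity measure; the function names are hypothetical.

```python
import numpy as np

def lowrank_reference(gather, rank=1):
    """Estimate the zero-offset reference trace from the leading
    principal components of an NMO-corrected gather (time x traces)."""
    u, s, vt = np.linalg.svd(gather, full_matrices=False)
    lowrank = (u[:, :rank] * s[:rank]) @ vt[:rank, :]
    return lowrank.mean(axis=1)

def similarity_weighted_stack(gather, ref):
    """Stack traces weighted by their (non-negative) correlation with
    the reference trace; dissimilar traces get small weights."""
    weights = np.array([max(np.corrcoef(tr, ref)[0, 1], 0.0)
                        for tr in gather.T])
    if weights.sum() == 0.0:
        return gather.mean(axis=1)
    return gather @ weights / weights.sum()

# Toy gather: 10 traces of a common signal plus noise, one abnormal trace
rng = np.random.default_rng(0)
nt, ntr = 200, 10
signal = np.sin(np.linspace(0.0, 4.0 * np.pi, nt))
gather = signal[:, None] + 0.05 * rng.standard_normal((nt, ntr))
gather[:, 3] = rng.standard_normal(nt)   # erratic, mis-aligned trace
ref = lowrank_reference(gather)
stacked = similarity_weighted_stack(gather, ref)
```

Because the erratic trace barely projects onto the leading principal component, it contaminates neither the reference trace nor (through its near-zero weight) the final stack, which is the robustness the abstract claims over the arithmetic-mean reference.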