Introduction

Seismic data are inevitably corrupted by random noise during field acquisition, which degrades their utility for oil and gas exploration. Thus, random noise attenuation plays a fundamental role in seismic data processing and interpretation (Gulunay, 2000; Li et al., 2016a; Gan et al., 2016d; Zhuang et al., 2015; Qu et al., 2015; Li et al., 2016b). Over the past few decades, a large number of random noise attenuation methods have been developed. Prediction based methods utilize the predictable property of useful signals to construct prediction filters that enhance signals and reject noise, for example, t-x predictive filtering (Abma and Claerbout, 1995), f-x deconvolution (Canales, 1984), the forward-backward prediction approach (Wang, 1999), the polynomial fitting based approach (Liu et al., 2011), and non-stationary predictive filtering (Liu and Chen, 2013; Liu et al., 2012). Mean and median filters utilize the statistical difference between signal and noise to reject Gaussian white noise or impulsive noise (Liu, 2013; Liu et al., 2009b; Gan et al., 2016c). Decomposition based approaches decompose the noisy seismic data into different components and then select the principal components to represent the useful signals; empirical mode decomposition and its variations (Huang et al., 1998), singular value decomposition based approaches (Gan et al., 2015a; Bekara and van der Baan, 2007; Chen and Ma, 2014), and regularized non-stationary decomposition based approaches (Fomel, 2013) are commonly used to extract the useful components of multidimensional seismic data. Rank-reduction based approaches assume the seismic data to be of low rank after some data rearrangement steps; such methods include Cadzow filtering (Trickett, 2008), principal component analysis (Huang et al., 2016b), singular spectrum analysis (Oropeza and Sacchi, 2011; Huang et al., 2017), and damped singular spectrum analysis (Chen et al., 2016b; Huang et al., 2016a; Zhang et al., 2016; Chen et al., 2016c).

Sparse transform based random noise attenuation is one of the most widely used approaches (Zhang et al., 2015; Chen, 2016). Transform-domain thresholding has achieved great success not only in seismic data processing but also in image processing in general (Cai et al., 2013; Protter and Elad, 2009). The denoising step can be implemented by simply applying a thresholding operator in the transformed sparse domain, followed by an inverse sparse transform. Sparse transforms can be divided into two types: analytical transforms, which have fixed, exact bases, and learning-based dictionaries, which iteratively update the basis by learning from the data (Chen et al., 2016a). I will use transform and dictionary to refer to these two types of sparse transform, respectively, in this paper.
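The following minimal sketch illustrates this forward transform / thresholding / inverse transform workflow; it is an illustrative example only, assuming a 2D discrete cosine transform as a stand-in for the sparse transforms discussed below and a hard threshold tied to an assumed noise level, rather than any specific transform used in this paper.

```python
# Minimal sketch of transform-domain thresholding denoising (illustrative only).
# Assumptions not taken from this paper: a 2D discrete cosine transform stands in
# for the sparse transform, and a hard threshold is scaled from a known noise level.
import numpy as np
from scipy.fft import dctn, idctn

def transform_thresholding_denoise(noisy, thresh):
    """Forward sparse transform -> thresholding -> inverse transform."""
    coeffs = dctn(noisy, norm='ortho')        # forward (orthonormal) transform
    coeffs[np.abs(coeffs) < thresh] = 0.0     # hard thresholding of small coefficients
    return idctn(coeffs, norm='ortho')        # inverse transform back to the data domain

# Usage on a toy 2D section with additive Gaussian noise
rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0, 4 * np.pi, 128)),
                 np.sin(np.linspace(0, 2 * np.pi, 64)))
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = transform_thresholding_denoise(noisy, thresh=3 * 0.2)
```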

Many transforms have been used to denoise seismic data. Gao et al. (2006) used the wavelet transform to denoise prestack seismic data. Wang et al. (2008) used the second-generation wavelet transform, which is based on the lifting scheme, to denoise seismic data with a percentile thresholding strategy. Hennenfent and Herrmann (2006) and Neelamani et al. (2008) applied the curvelet transform to attenuate both random and coherent noise in seismic data. Zu et al. (2016) applied the curvelet transform to separate simultaneous sources based on the iterative soft-thresholding algorithm. Fomel and Liu (2010) designed a sparse transform tailored specifically to seismic data, called the seislet transform, for sparse representation based processing of seismic data, including seismic denoising (Chen, 2016; Wu et al., 2016), seismic deblending (Chen et al., 2014; Gan et al., 2016b), and data restoration (Liu et al., 2016; Gan et al., 2016a, 2015b). Chen and Fomel (2015a) used the adaptive separation property of empirical mode decomposition (EMD) (Huang et al., 1998) to prepare a stable input for the non-stationary 1D seislet transform and proposed the EMD-seislet transform to denoise seismic data with strong spatial heterogeneity. Recently, Kong and Peng (2015) applied the shearlet transform to seismic random noise attenuation.

Learning-based dictionaries have become increasingly popular in seismic data processing in recent years because of their superior performance in adaptively learning a basis that can sparsely represent complicated seismic data (Sahoo and Makur, 2013). Kaplan et al. (2009) used a data-driven sparse-coding algorithm to adaptively learn basis functions that sparsely represent seismic data and then performed denoising in the transformed domain. Based on a variational sparse-representation model, Beckouche and Ma (2014) proposed a denoising approach that adaptively learns dictionaries from noisy seismic data. Chen et al. (2016a) combined learning-based dictionaries with fixed-basis transforms and proposed a double-sparsity dictionary to better handle the special features of seismic data, which can separate signal and noise more precisely.

K-SVD is one of the most effective dictionary learning algorithms (Aharon et al., 2006). However, its computational cost, which involves thousands of singular value decompositions (SVDs), hinders its wide application in seismic data processing, especially for practical 3D or 5D problems. In this paper, I propose to apply a fast dictionary learning algorithm, the sequential generalized K-means (SGK) algorithm (Sahoo and Makur, 2013), to denoise multidimensional seismic data. Since sparse coding is relatively new to the seismic community, I first introduce the basic formulation of a sparse representation problem, mathematically analyze the principle of the K-SVD algorithm, and clarify its computational bottleneck. I then introduce the SGK algorithm in detail and apply both the K-SVD and SGK algorithms to denoise multidimensional seismic data. Three examples show that the SGK algorithm significantly accelerates the dictionary learning process without observable degradation in denoising performance.
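To give a rough sense of where the speed difference originates, the sketch below contrasts the per-atom dictionary update of the two algorithms. It is a schematic illustration only, not the implementations used in this paper (the full derivations appear later); here E_k denotes the residual restricted to the training signals that use atom k, and g_k the corresponding coefficient row.

```python
# Schematic sketch (not the full algorithms) of the per-atom dictionary update:
# K-SVD requires one SVD per atom per iteration, whereas an SGK-style update keeps
# the coefficients fixed and fits the atom in closed form with a matrix-vector product.
import numpy as np

def ksvd_atom_update(E_k):
    """K-SVD: a rank-1 approximation of the restricted residual E_k
    updates both the atom and its coefficient row."""
    U, s, Vt = np.linalg.svd(E_k, full_matrices=False)
    d_k = U[:, 0]                # new atom (leading left singular vector)
    g_k = s[0] * Vt[0, :]        # new coefficient row
    return d_k, g_k

def sgk_atom_update(E_k, g_k):
    """SGK-style update: least-squares fit of the atom to E_k with the
    coefficient row g_k held fixed, followed by normalization."""
    d_k = E_k @ g_k
    return d_k / (np.linalg.norm(d_k) + 1e-12)
```

Replacing the per-atom SVD with a simple closed-form update is the essential source of the acceleration examined in the examples.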

