Documentation

Ground-roll noise attenuation using local orthogonalization

November 24, 2015

A new paper is added to the collection of reproducible documents: Ground-roll noise attenuation using a simple and effective approach based on local bandlimited orthogonalization

Bandpass filtering is a common way to estimate ground-roll noise on land seismic data, because of the relatively low frequency content of ground-roll. However, there is usually a frequency overlap between ground-roll and the desired seismic reflections that prevents bandpass filtering alone from effectively removing ground-roll without also harming the desired reflections. We apply a bandpass filter with a relatively high upper bound to provide an initial, imperfect separation of ground-roll and reflection signal. We then apply a technique called ‘local orthogonalization’ to improve the separation. The procedure is easily implemented, since it involves only bandpass filtering and a regularized division of the initial signal and noise estimates. We demonstrate the effectiveness of the method on an open-source set of field data.
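As a toy illustration of the idea, here is what the local-orthogonalization step might look like in NumPy: the weight comes from a smoothed (regularized) division of the initial noise and signal estimates, and the weighted signal component is moved from the noise estimate back to the signal. The smoothing length, regularization value, and the synthetic "ground roll" below are all illustrative choices, not the paper's actual parameters.

```python
import numpy as np

def local_orthogonalize(signal, noise, smooth_len=11, eps=1e-3):
    # weight = smoothed <noise, signal> / smoothed <signal, signal>,
    # a regularized division of the two initial estimates
    kernel = np.ones(smooth_len) / smooth_len
    num = np.convolve(noise * signal, kernel, mode="same")
    den = np.convolve(signal * signal, kernel, mode="same") + eps
    w = num / den
    leaked = w * signal          # coherent signal leaked into the noise
    return signal + leaked, noise - leaked

# toy data: low-frequency "ground roll" plus one reflection pulse
t = np.linspace(0.0, 1.0, 501)
reflection = np.exp(-((t - 0.5) ** 2) / 1e-4)
ground_roll = np.sin(2 * np.pi * 5 * t)
data = reflection + ground_roll

# imperfect bandpass separation: the noise estimate kept 30% of the pulse
noise_est = ground_roll + 0.3 * reflection
signal_est = data - noise_est            # keeps only 70% of the reflection

signal_new, noise_new = local_orthogonalize(signal_est, noise_est)
```

After orthogonalization the noise estimate is much closer to the pure ground roll, while the leaked reflection energy returns to the signal estimate.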

Selective hybrid approach using f-x EMD

November 23, 2015

A new paper is added to the collection of reproducible documents: Random noise attenuation by a selective hybrid approach using f-x empirical mode decomposition

Empirical mode decomposition (EMD) has recently become attractive for random noise attenuation because of its convenient implementation and its ability to deal with non-stationary seismic data. In this paper, we summarize the existing use of EMD in seismic data denoising and introduce a general hybrid scheme that combines $f-x$ EMD with a dipping-events retrieving operator. The hybrid scheme achieves better denoising performance than either conventional $f-x$ EMD or the selected dipping-events retriever alone. We demonstrate the strong horizontal-preservation capability of $f-x$ EMD, which makes the EMD-based hybrid approach attractive. When $f-x$ EMD is applied to a seismic profile, all the horizontal events are preserved, leaving only a few dipping events and random noise in the noise section; these can then be handled easily by applying a dipping-events retrieving operator to a specific region in order to preserve the useful dipping signal. Because the second operator is applied only selectively, this type of incomplete hybrid approach is termed a selective hybrid approach. Two synthetic examples and one post-stack field-data example demonstrate the better performance of the proposed approach.
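The generic $f-x$ processing loop underlying such methods can be sketched as follows. Note that a simple running mean along space stands in here for the EMD step (which would keep only the smoothest intrinsic mode functions at each frequency), so this is only a schematic of the data flow, not the paper's algorithm.

```python
import numpy as np

def fx_denoise(data, spatial_filter):
    # t -> f along every trace, filter each frequency slice along x,
    # then transform back: the generic f-x processing loop
    nt = data.shape[0]
    fdata = np.fft.rfft(data, axis=0)
    for i in range(fdata.shape[0]):
        fdata[i] = spatial_filter(fdata[i].real) + 1j * spatial_filter(fdata[i].imag)
    return np.fft.irfft(fdata, n=nt, axis=0)

def running_mean(x, w=5):
    # stand-in for f-x EMD: keeps laterally smooth (horizontal) energy
    # and suppresses rapidly varying random noise along the space axis
    return np.convolve(x, np.ones(w) / w, mode="same")

rng = np.random.default_rng(0)
nt, nx = 64, 32
# purely horizontal events: every trace is the same sinusoid
clean = np.tile(np.sin(2 * np.pi * np.arange(nt) / 16.0)[:, None], (1, nx))
noisy = clean + 0.5 * rng.standard_normal((nt, nx))
denoised = fx_denoise(noisy, running_mean)
```

Because the clean events are horizontal (laterally invariant), any lateral smoother in the $f-x$ domain preserves them while attenuating the random noise, which is the preservation property the paper exploits.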

Deblending Using SVMF

November 23, 2015

A new paper is added to the collection of reproducible documents: Deblending using a space-varying median filter

Deblending is a currently popular method for dealing with simultaneous-source seismic data. The key to the deblending process is removing blending noise while preserving as much useful signal as possible. In this paper, I propose to use a space-varying median filter (SVMF) to remove blending noise, and I demonstrate that this filter preserves more useful seismic reflection energy than the conventional median filter (MF). In SVMF, I use signal reliability (SR) as a reference to detect the blending spikes and increase the window length in order to attenuate them; where useful signals are identified, the window length is decreased in order to preserve more energy. The SR is defined as the local similarity between the data initially filtered using MF and the original noisy data. In this way, SVMF is regionally adaptive, instead of rigidly using a constant window length across the whole profile as MF does. Synthetic and field-data examples demonstrate excellent performance of the proposed method.
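A minimal 1-D sketch of the idea is given below; the window lengths and reliability threshold are made-up illustrative values, and a crude windowed correlation coefficient stands in for the local-similarity measure used in the paper.

```python
import numpy as np

def median_filt(x, w):
    # plain 1-D median filter with edge padding
    pad = w // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.median(xp[i:i + w]) for i in range(len(x))])

def local_similarity(a, b, w=9):
    # crude stand-in: windowed correlation coefficient between a and b
    pad = w // 2
    ap, bp = np.pad(a, pad, mode="edge"), np.pad(b, pad, mode="edge")
    out = np.empty(len(a))
    for i in range(len(a)):
        wa, wb = ap[i:i + w], bp[i:i + w]
        out[i] = wa @ wb / (np.linalg.norm(wa) * np.linalg.norm(wb) + 1e-12)
    return out

def svmf(x, base_w=5, short_w=3, long_w=11, thresh=0.9):
    # signal reliability (SR) = local similarity between the MF-filtered
    # and original data; low SR (spikes) -> long window, high SR -> short
    sr = local_similarity(median_filt(x, base_w), x)
    out = np.empty_like(x)
    for i in range(len(x)):
        w = short_w if sr[i] >= thresh else long_w
        lo, hi = max(0, i - w // 2), min(len(x), i + w // 2 + 1)
        out[i] = np.median(x[lo:hi])
    return out

t = np.linspace(0.0, 1.0, 200)
signal = np.sin(2 * np.pi * 3 * t)
noisy = signal.copy()
noisy[[40, 90, 150]] += 5.0          # strong "blending" spikes
cleaned = svmf(noisy)
```

The long window kicks in only around the spikes, so they are attenuated while the smooth signal elsewhere passes through a short window nearly untouched.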

Velocity-dependent seislet transform

October 25, 2015

A new paper is added to the collection of reproducible documents: Signal and noise separation in prestack seismic data using velocity-dependent seislet transform

The seislet transform is a wavelet-like transform that analyzes seismic data by following varying slopes of seismic events across different scales, providing a multiscale orthogonal basis for seismic data. It generalizes the discrete wavelet transform (DWT) in the sense that DWT in the lateral direction is simply the seislet transform with a zero slope. Our earlier work used plane-wave destruction (PWD) to estimate smoothly varying slopes. However, the PWD operator can be sensitive to strong noise interference, which makes the PWD-based seislet transform occasionally fail to provide a sparse multiscale representation for seismic field data. We adopt a new velocity-dependent (VD) formulation of the seislet transform, in which the normal-moveout equation serves as a bridge between local slope patterns and conventional moveout parameters in the common-midpoint (CMP) domain. The VD slope is more resistant to strong random noise, which indicates the potential of VD seislets for random noise attenuation under the 1D earth assumption. Different slope patterns for primaries and multiples further enable a VD-seislet frame to separate primaries from multiples when the velocity models of primaries and multiples are well separated. Results of applying the method to synthetic and field-data examples demonstrate that the VD-seislet transform can help eliminate strong random noise. Synthetic and field-data tests also show the effectiveness of the VD-seislet frame for separating primaries from pegleg multiples of different orders.
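The bridge between moveout parameters and slopes is simple to sketch: under hyperbolic moveout t(x) = sqrt(t0^2 + x^2/v^2), the local slope follows directly from the NMO velocity as p = dt/dx = x / (v^2 t), with no slope estimation from the (possibly noisy) data itself. A quick numerical check with illustrative values:

```python
import numpy as np

t0 = 1.0                              # zero-offset time (s), illustrative
v = 2.0                               # NMO velocity (km/s), illustrative
x = np.linspace(0.0, 2.0, 101)        # offsets (km)

t = np.sqrt(t0**2 + x**2 / v**2)      # hyperbolic moveout curve t(x)
p_vd = x / (v**2 * t)                 # velocity-dependent slope dt/dx

p_num = np.gradient(t, x)             # numerical derivative for comparison
```

Because the slope comes from the picked velocity rather than from the data, it inherits the noise resistance of velocity analysis, which is the point of the VD formulation.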

Anelliptic approximations in TI and orthorhombic media

September 15, 2015

A new paper is added to the collection of reproducible documents: On anelliptic approximations for qP velocities in TI and orthorhombic media

Anelliptic approximations for phase and group velocities of qP waves in transversely isotropic (TI) media have been widely applied in various seismic data processing and imaging tasks. We revisit previously proposed approximations and suggest two improvements. The first is an empirical connection between anelliptic parameters along different fitting axes, based on laboratory measurements of anisotropy in rock samples of different types. The observed relationship between anelliptic parameters is strongly linear, suggesting a novel set of anisotropic parameters suitable for studying qP-wave signatures. The second improvement is a new functional form for the anelliptic parameter term that achieves a better fit along the horizontal axis. These modifications lead to improved three-parameter and four-parameter approximations for phase and group velocities of qP waves in TI media. In a number of model comparisons, the new three-parameter approximations are more accurate than previous approximations with the same number of parameters. These modifications also serve as a foundation for an extension to orthorhombic media, where qP velocities involve nine independent elastic parameters. However, as shown by previous researchers, qP-wave propagation in orthorhombic media can be adequately approximated using just six combinations of those nine parameters. We propose novel six-parameter approximations for phase and group velocities of qP waves in orthorhombic media. The proposed orthorhombic phase-velocity approximation provides a more accurate alternative to previously known approximations and can find applications in full-wave modeling, imaging, and inversion. The proposed group-velocity approximation is also highly accurate and can find applications in ray tracing and velocity analysis.

Deblending with multiple constraints

September 15, 2015

A new paper is added to the collection of reproducible documents: Iterative deblending with multiple constraints based on shaping regularization

It has been shown previously that blended simultaneous-source data can be successfully separated using an iterative seislet thresholding algorithm. In this paper, I combine iterative seislet thresholding with the local orthogonalization technique via the shaping-regularization framework. During the iterations, the deblended data and its blending-noise section are not orthogonal to each other, indicating that the noise section contains significant coherent useful energy. Although this leaked useful energy can be retrieved by updating the deblended data from the data misfit over many iterations, I propose to accelerate its retrieval using iterative orthogonalization. This is the first time that multiple constraints have been applied to an underdetermined deblending problem, and the proposed framework overcomes the drawback of the low-dimensionality constraint in the traditional 2D deblending problem. Simulated synthetic and field-data examples show the superior performance of the proposed approach.
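The shaping iteration at the heart of such algorithms can be sketched as follows. For brevity, the blending operator is taken as the identity (so deblending reduces to suppressing spiky interference), the FFT stands in for the seislet transform, the orthogonalization constraint is omitted, and all parameters are illustrative.

```python
import numpy as np

def soft(c, tau):
    # complex soft thresholding: shrink coefficient magnitudes by tau
    mag = np.abs(c)
    return c * (np.maximum(mag - tau, 0.0) / np.maximum(mag, 1e-30))

rng = np.random.default_rng(1)
n = 256
t = np.arange(n) / n
clean = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 21 * t)
blended = clean.copy()
spikes = rng.choice(n, 12, replace=False)
blended[spikes] += 3.0 * rng.standard_normal(12)   # interference spikes

m = np.zeros(n)
lam = 0.5                                  # step size of the shaping iteration
for _ in range(30):
    m = m + lam * (blended - m)            # update from the data misfit
    m = np.fft.irfft(soft(np.fft.rfft(m), tau=10.0), n=n)  # shaping step
```

Each pass alternates a data-misfit update with a sparsity-promoting shaping step; the signal, sparse in the transform domain, survives the thresholding while the incoherent interference does not.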

Similarity-weighted semblance

June 25, 2015

A new paper is added to the collection of reproducible documents: Velocity analysis using similarity-weighted semblance

Weighted semblance can improve the performance of traditional semblance for specific datasets. We propose a novel approach to prestack velocity analysis using weighted semblance, in which the novelty lies in a different weighting criterion: the local similarity between each trace and a reference trace. On the one hand, low similarity corresponds to a noise point or a point indicating incorrect moveout, which should be given a small weight. On the other hand, high similarity corresponds to a point indicating correct moveout, which should be given a large weight. The proposed approach can also be used effectively for analyzing AVO anomalies, with increased resolution compared with AB semblance. Both synthetic and field CMP gathers demonstrate the higher resolution of the proposed approach. Application of the proposed method to a prestack dataset further confirms that data stacked using similarity-weighted semblance show better-focused events, indicating more precise velocity picking.
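A toy NumPy sketch of the weighting idea: a plain correlation coefficient against the stacked trace stands in for the local-similarity measure, and the gather, wavelet, and trace count are all illustrative.

```python
import numpy as np

def semblance(gather, weights=None):
    # gather: nt x ntraces, assumed NMO-corrected for a trial velocity;
    # with weights this becomes a (similarity-)weighted semblance
    if weights is None:
        weights = np.ones(gather.shape[1])
    g = gather * weights
    num = (g.sum(axis=1)) ** 2
    den = gather.shape[1] * (g ** 2).sum(axis=1)
    return num.sum() / (den.sum() + 1e-12)

rng = np.random.default_rng(2)
nt, nx = 100, 12
wavelet = np.exp(-((np.arange(nt) - 50) ** 2) / 20.0)
gather = np.tile(wavelet[:, None], (1, nx))    # perfectly flat event
gather[:, 5] = rng.standard_normal(nt)         # one unreliable trace

stack = gather.mean(axis=1)                    # reference trace
sim = np.array([max(gather[:, i] @ stack, 0.0) /
                (np.linalg.norm(gather[:, i]) * np.linalg.norm(stack) + 1e-12)
                for i in range(nx)])

s_plain = semblance(gather)
s_weighted = semblance(gather, weights=sim)
```

Down-weighting the dissimilar trace raises the semblance of the correct moveout, sharpening the peak in a velocity scan.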

Test case for PEF estimation

June 24, 2015

Another old paper is added to the collection of reproducible documents: Test case for PEF estimation with sparse data II

The two-stage missing-data interpolation approach of Claerbout (1998) (henceforth, the GEE approach) has been applied with great success in the past (Fomel et al., 1997; Clapp et al., 1998; Crawley, 2000). The main strength of the approach lies in the ability of the prediction-error filter (PEF) to find multiple, hidden correlations in the known data, and then, via regularization, to impose the same correlation (covariance) onto the unknown model. Unfortunately, the GEE approach may break down in the face of very sparsely distributed data, as the number of valid regression equations in the PEF estimation step may drop to zero. In this case, the most common remedy is simply to retreat to regularizing with an isotropic differential filter (e.g., a Laplacian), which leads to a minimum-energy solution and implicitly assumes an isotropic model covariance.
A pressing goal of many SEP researchers is to find a way of estimating a PEF from sparse data. Although new ideas are certainly required to solve this interesting problem, Claerbout (2000) proposes that a standard, simple test case first be constructed, and suggests using a known model with vanishing Gaussian curvature. In this paper, we present the following, simpler test case, which we feel makes for a better first step.

  • Model: Deconvolve a 2-D field of random numbers with a simple dip filter, leading to a “plane-wave” model.
  • Filter: The ideal interpolation filter is simply the dip filter used to create the model.
  • Data: Subsample the known model randomly and so sparsely as to make conventional PEF estimation impossible.

We use the aforementioned dip filter to regularize a least-squares estimation of the missing model points and show that this filter is ideal, in the sense that the model residual is relatively small. Interestingly, we find that the characteristics of the true model and of the interpolation result depend strongly on the accuracy (dip-spectrum localization) of the dip filter. We chose the 8-point truncated sinc filter presented by Fomel (2000), and we briefly discuss the motivation for this choice.
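The model construction is easy to reproduce for the simplest case of an integer dip of one sample per trace, where a two-term difference filter annihilates the plane-wave model exactly (the 8-point sinc filter generalizes this to arbitrary dips, and the sparse-subsampling and regularized-inversion steps of the test case are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)
nt, nx = 64, 16
trace = rng.standard_normal(nt + nx)      # random series to propagate

# "plane-wave" model: each trace is the same random series shifted by
# one sample per trace, i.e. m(t, x) = trace[t + x]
model = np.empty((nt, nx))
for ix in range(nx):
    model[:, ix] = trace[ix:ix + nt]

# the matching two-term dip filter d(t, x) = m(t, x) - m(t - 1, x + 1)
# annihilates the model exactly wherever the filter fits on the grid
resid = model[1:, :-1] - model[:-1, 1:]
```

A filter that annihilates the model is exactly what the regularized least-squares interpolation needs: penalizing the filter output spreads the sparse known values along the dip direction.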

Double-elliptic approximation in TI media

June 16, 2015

Another old paper is added to the collection of reproducible documents: The double-elliptic approximation in the group and phase domains

Elliptical anisotropy has found wide use as a simple approximation to transverse isotropy because of a unique symmetry property (an elliptical dispersion relation corresponds to an elliptical impulse response) and a simple relationship to standard geophysical techniques (hyperbolic moveout corresponds to elliptical wavefronts; NMO measures horizontal velocity, and time-to-depth conversion depends on vertical velocity). However, elliptical anisotropy is useful as an approximation only in certain restricted cases, such as when the underlying true anisotropy does not depart too far from ellipticity or the observed angular aperture is small. This limitation is fundamental, because only two parameters are needed to define an ellipse: the horizontal and vertical velocities. (Sometimes the orientation of the principal axes is also included as a free parameter, but usually not.)
In a previous SEP report, Muir (1990) showed how to extend the standard elliptical approximation to a so-called double-elliptic form. (The relation between the elastic constants of a TI medium and the coefficients of the corresponding double-elliptic approximation is developed in a companion paper (Muir, 1991).) The aim of this new approximation is to preserve the useful properties of elliptical anisotropy while doubling the number of free parameters, thus allowing a much wider range of transversely isotropic media to be adequately fit. At first glance this goal seems unattainable: elliptical anisotropy is the most complex form of anisotropy possible with a simple analytical form in both the dispersion-relation and impulse-response domains. Muir’s approximation is useful because it nearly satisfies both incompatible goals at once: it has a simple relationship to NMO and to the true vertical and horizontal velocities, and to a good approximation it has the same simple analytical form in both domains of interest.
The purpose of this short note is to test by example how well the double-elliptic approximation comes to meeting these goals:

  1. Simple relationships to NMO and true velocities on principal axes.
  2. Simple analytical form for both the dispersion relation and impulse response.
  3. Approximates general transversely isotropic media well.

The results indicate that the method should work well in practice.
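The symmetry property of elliptical anisotropy quoted above is easy to verify numerically: the group-velocity (impulse-response) surface derived from an elliptical phase-velocity (dispersion) relation is itself an ellipse with the same principal axes. A sketch with illustrative velocities:

```python
import numpy as np

vh, vv = 3.0, 2.0                         # horizontal and vertical velocities
theta = np.linspace(0.0, np.pi / 2, 181)  # phase angle from vertical

# elliptical phase-velocity (dispersion) relation
v = np.sqrt(vh**2 * np.sin(theta)**2 + vv**2 * np.cos(theta)**2)
dv = np.gradient(v, theta)                # dv/dtheta, numerically

# standard group-velocity components from the phase velocity and its
# angular derivative
vgx = v * np.sin(theta) + dv * np.cos(theta)
vgz = v * np.cos(theta) - dv * np.sin(theta)

# the group surface satisfies the same ellipse equation
ellipse = (vgx / vh)**2 + (vgz / vv)**2
```

For any non-elliptical phase relation this identity fails, which is exactly why matching both domains at once, as the double-elliptic approximation attempts, is so difficult.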

Amplitude balancing

May 9, 2015

Another old paper is added to the collection of reproducible documents: Iterative least-square inversion for amplitude balancing

Variations in source strength and receiver amplitude can introduce a bias into the final AVO analysis of prestack seismic reflection data. In this paper we tackle the problem of amplitude balancing for the seismic traces of a marine survey. We start with a 2-D energy map from which the global trend has been removed. To balance this amplitude map, we first invert for the correction coefficients using an iterative least-squares algorithm. The coefficients are calculated for each shot position along the survey line, each receiver position in the recording cable, and each offset. Using these coefficients, we then correct the original amplitude map for amplitude variations in the shot, receiver, and offset directions.
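A schematic version of the inversion is sketched below, assuming a marine geometry in which the receiver index equals shot index plus offset index, a noise-free additive model for the detrended log-amplitude map, and alternating per-coefficient-group updates as a simple stand-in for the paper's iterative least-squares algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
nshot, noff = 40, 15
nrecv = nshot + noff - 1                 # receiver index = shot + offset
true_shot = rng.normal(0.0, 0.3, nshot)  # synthetic shot-strength terms
true_recv = rng.normal(0.0, 0.3, nrecv)  # synthetic receiver terms
true_off = rng.normal(0.0, 0.2, noff)    # synthetic offset terms

s_idx, o_idx = np.meshgrid(np.arange(nshot), np.arange(noff), indexing="ij")
r_idx = s_idx + o_idx
# detrended log-amplitude map: sum of shot, receiver, and offset terms
amp = true_shot[s_idx] + true_recv[r_idx] + true_off[o_idx]

cs, cr, co = np.zeros(nshot), np.zeros(nrecv), np.zeros(noff)
for _ in range(100):
    # alternating least-squares: update one coefficient group at a time
    resid = amp - cs[s_idx] - cr[r_idx] - co[o_idx]
    cs += np.array([resid[s_idx == s].mean() for s in range(nshot)])
    resid = amp - cs[s_idx] - cr[r_idx] - co[o_idx]
    cr += np.array([resid[r_idx == r].mean() for r in range(nrecv)])
    resid = amp - cs[s_idx] - cr[r_idx] - co[o_idx]
    co += np.array([resid[o_idx == o].mean() for o in range(noff)])

resid = amp - cs[s_idx] - cr[r_idx] - co[o_idx]   # balanced (corrected) map
```

Subtracting the fitted shot, receiver, and offset terms from the map is the balancing correction; the decomposition is unique only up to constants traded between the terms, which does not affect the corrected map.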