Robust time-to-depth conversion

November 9, 2014 Documentation No comments

A new paper is added to the collection of reproducible documents:
A robust approach to time-to-depth conversion and interval velocity estimation from time migration in the presence of lateral velocity variations

The problem of conversion from time-migration velocity to an interval velocity in depth in the presence of lateral velocity variations can be reduced to solving a system of partial differential equations. In this paper, we formulate the problem as a nonlinear least-squares optimization for seismic interval velocity and seek its solution iteratively. The input for inversion is the Dix velocity, which also serves as an initial guess. The inversion gradually updates the interval velocity in order to account for lateral velocity variations that are neglected in the Dix inversion. The algorithm has a moderate cost thanks to regularization that speeds up convergence while ensuring a smooth output. The proposed method should be numerically robust compared to the previous approaches, which amount to monotonic extrapolation in depth. For a successful time-to-depth conversion, image-ray caustics should be either nonexistent or excluded from the computational domain. The resulting velocity can be used in subsequent depth-imaging model building. Both synthetic and field data examples demonstrate the applicability of the proposed approach.
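
Schematically (my own notation, not necessarily the paper's formulation), one iteration of such a regularized nonlinear least-squares inversion can be viewed as a linearized update followed by smoothing:

    v_0 = v_{\mathrm{Dix}}, \qquad v_{k+1} = v_k + \mathbf{S}\,\Delta v_k, \qquad \Delta v_k = \arg\min_{\Delta v} \big\| \mathbf{F}'(v_k)\,\Delta v - \big( d - \mathbf{F}(v_k) \big) \big\|^2 ,

where F(v) predicts the observed (Dix-derived) data from an interval-velocity model v, F'(v_k) is its linearization, and S is a smoothing operator that keeps each update, and hence the output velocity, smooth. The paper's actual discretization and regularization may differ from this sketch.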

Tutorial on parameter testing

October 22, 2014 Examples No comments

The example in rsf/tutorials/parameters reproduces the tutorial from Matt Hall on parameter testing. Madagascar users are encouraged to try improving the results.

In his blog post and in the discussion that follows, Matt brings up an interesting question about finding the best way to select parameters.

For lack of a better approach, parameter selection in seismic attributes remains an interactive game. In the Madagascar version, the key parameter for the Canny edge detector is the amount of prior anisotropic-diffusion smoothing, controlled by the smoothing radius (the rect= parameter). We can do different things with it: for example, make a movie looping through different values of the radius or, by exposing the parameter to the command-line SCons interface, build a simple GUI script for controlling it. The question posed by Matt still awaits a better answer.
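
As a rough illustration of the movie idea (a sketch only: seismic.rsf is a hypothetical input file, and plain triangle smoothing with sfsmooth stands in for the anisotropic-diffusion smoothing used in the actual tutorial), one can scan the radius and stack the results along a third axis for animation:

bash$ for r in 1 2 4 8; do < seismic.rsf sfsmooth rect1=$r rect2=$r > smooth$r.rsf; done
bash$ sfcat axis=3 smooth1.rsf smooth2.rsf smooth4.rsf smooth8.rsf > radius.rsf
bash$ < radius.rsf sfgrey gainpanel=all title="smoothing radius test" | sfpen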


Tutorial on colormaps

October 18, 2014 Examples No comments

The example in rsf/tutorials/colormaps reproduces the tutorial from Matteo Niccoli on how to evaluate and compare color maps. The tutorial was published in the August 2014 issue of The Leading Edge. Madagascar users are encouraged to try improving the results.


Several new color palettes have been recently added to Madagascar (thanks to Aaron Stanton): color=seismic (red-yellow-white-black, popular among seismic interpreters), color=owb (orange-white-black), and color=rwb (red-white-black).
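
To try one of the new palettes on an image (a minimal sketch; image.rsf is a hypothetical input file, and color=owb or color=rwb can be substituted the same way):

bash$ < image.rsf sfgrey color=seismic scalebar=y title="seismic palette" | sfpen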

Petition to raise awareness about the role of software in research

October 17, 2014 Links No comments


The Software Sustainability Institute in the UK has created an online petition to “everyone in the research community”, which states “We must accept that software is fundamental to research, or we will lose our ability to make groundbreaking discoveries.”

1. We want software to be treated as a valuable research object which befits the same level of investment and effort as any other aspect of the research infrastructure.
2. We want researchers to be encouraged to spend time learning about software, because the value of that knowledge is understood to improve research.
3. We want the people who develop research software to be recognised and rewarded for their invaluable contribution to research.
4. We want a research environment in which software-reliant projects are encouraged to hire software developers, rather than having to hide these valuable staff members in anonymous postdoctoral positions.
5. Ultimately, we want the research community to recognise software’s fundamental role in research.

You can sign the petition at Change.org.

Program of the month: sfsigmoid

October 8, 2014 Programs No comments

sfsigmoid generates a 2-D synthetic reflectivity model, created by Jon Claerbout.

One of the first occurrences of this model is in the SEP-73 sponsor report from 1992, where it appeared in several papers:

  • J. F. Claerbout, 1992, Introduction to Kirchhoff Migration Programs: SEP-73 report, 361-366, Stanford Exploration Project.
  • J. F. Claerbout, 1992, Filling Data Gaps Using a Local Plane-Wave Model: SEP-73 report, 401-408, Stanford Exploration Project.
  • J. F. Claerbout, 1992, Information from Smiles: Mono-Plane-Annihilator Weighted Regression: SEP-73 report, 409-420, Stanford Exploration Project.
  • J. F. Claerbout, 1992, Crossline Regridding by Inversion: SEP-73 report, 421-428, Stanford Exploration Project.

The model was described as “a synthetic model that illustrates local variation in bedding. Notice dipping bedding, curved bedding, unconformity between them, and a fault in the curved bedding.” Later, the sigmoid model made an appearance in Claerbout’s book Basic Earth Imaging. The following example from bei/krch/sep73 illustrates the effect of aliasing on Kirchhoff modeling and migration:

The model has appeared in numerous other tests. The following example from tccs/flat/flat shows automatic flattening of the sigmoid model by predictive painting.

sfsigmoid has several parameters that control the model. The usual n1=, n2=, o1=, o2=, d1=, d2= parameters control the mesh size and sampling; taper= indicates whether to taper the sides of the model; and large= controls the length of the synthetic reflectivity series. The program takes no input.
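
A minimal run (the parameter values below are arbitrary, chosen only for illustration):

bash$ sfsigmoid n1=200 n2=400 d1=0.004 d2=0.008 > sigmoid.rsf
bash$ < sigmoid.rsf sfgrey title=Sigmoid | sfpen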


High-performance computing and open-source software

October 8, 2014 Links No comments

A recent Report on High Performance Computing by the US Secretary of Energy Advisory Board contains a bizarre section on open source software, which states

There has been very little open source that has made its way into broad use within the HPC commercial community where great emphasis is placed on serviceability and security.

In his thoughtful blog post in response to this report, Will Schroeder, the CEO and co-founder of the legendary Kitware Inc., makes a number of strong points defending the role of open source in the past and future development of HPC. He concludes:

The basic point here is that issues of scale require us to remove inefficiencies in researching, deploying, funding, and commercializing technology, and to find ways to leverage the talents of the broader community. Open source is a vital, strategic tool to do this as has been borne out by the many OS software systems now being used in HPC application… It’s easy to overlook open source as a vital tool to accomplish this important goal, but in a similar way that open source Linux has revolutionized commercial computing, open source HPC software will carry us forward to meet the demands of increasingly complex computing systems.

See also Will Schroeder’s presentation The New Scientific Publishers at SciPy-2013.

Program of the month: sfmax1

September 24, 2014 Programs No comments

sfmax1 finds local maxima along the first axis of the input. It takes floating-point input but outputs complex numbers, where the real part stands for the location of a local maximum and the imaginary part stands for the value of the input at that maximum.

The number of maxima to output is controlled by the np= parameter. To restrict the range of the maxima locations (in case it is smaller than the full range of the data), use min= and max=. The output is sorted by value so that the largest maxima appear first. Here is a quick example. Let us create some data:

bash$ sfmath n1=5 output="sin(x1)" > data.rsf 
bash$ < data.rsf sfdisfil
   0:             0       0.8415       0.9093       0.1411      -0.7568

Observing the data values, we can suspect that the local maximum is between 1 and 2.

bash$ < data.rsf sfmax1 np=1 | sfdisfil
   0:      1.581,    0.9826i

sfmax1 uses local parabolic interpolation to locate the maximum at 1.581 with the value of 0.9826.
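
As a quick check (my own arithmetic using the standard three-point parabolic peak formula, not anything taken from the program's source), fitting a parabola through the samples around the largest value, f(1)=0.8415, f(2)=0.9093, f(3)=0.1411, places its vertex at

    \hat{x} = 2 + \frac{f(1)-f(3)}{2\,\big(f(1)-2f(2)+f(3)\big)} = 2 - 0.419 \approx 1.581 ,

in agreement with the output above.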

In the following example, from tccs/flat/flat, sfmax1 is used to locate the strongest-amplitude horizons for predictive painting.


Tutorial on data slicing

August 20, 2014 Examples No comments

The example in rsf/tutorials/slicing reproduces the tutorial from Evan Bianco on simple data slicing.
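
In Madagascar, this kind of slicing typically comes down to sfwindow. A minimal sketch (cube.rsf and the slice index are hypothetical) that extracts and displays a constant-index slice along the third axis:

bash$ < cube.rsf sfwindow n3=1 f3=100 > slice.rsf
bash$ < slice.rsf sfgrey title="slice" | sfpen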


Madagascar users are encouraged to try improving the results.

Iterative deblending using shaping regularization

August 20, 2014 Documentation No comments

A new paper is added to the collection of reproducible documents:
Iterative deblending of simultaneous-source seismic data using seislet-domain shaping regularization

We introduce a novel iterative estimation scheme for separation of blended seismic data from simultaneous sources. The scheme is based on an augmented estimation problem, which can be solved by iteratively constraining the deblended data using shaping regularization in the seislet domain. We formulate the forward modeling operator in the common receiver domain, where two sources are assumed to be blended using a random time-shift dithering approach. The nonlinear shaping-regularization framework offers some freedom in designing a shaping operator to constrain the model in an underdetermined inverse problem. We design the backward operator and the shaping operator for the shaping regularization framework. The backward operator can be optimally chosen as half of the identity operator in the two-source case, and the shaping operator can be chosen as a coherency-promoting operator. Three numerically blended synthetic datasets and one numerically blended field dataset demonstrate the high-performance deblending effect of the proposed iterative framework. Compared with alternative f-k domain thresholding and f-x predictive filtering, seislet-domain soft thresholding exhibits the most robust behavior.
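
In schematic form (a sketch based on the abstract wording, not the paper's exact notation), the iteration described above is

    \widetilde{\mathbf{m}}_{k+1} = \mathbf{S}\Big[\, \widetilde{\mathbf{m}}_k + \mathbf{B}\big( \mathbf{d} - \mathbf{F}\,\widetilde{\mathbf{m}}_k \big) \Big] ,

where F is the blending (forward) operator, B is the backward operator (half of the identity in the two-source case), and S is the coherency-promoting shaping operator, realized in the paper's examples as soft thresholding in the seislet domain.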

Second Madagascar Working Workshop

August 5, 2014 Celebration No comments

“Working workshops” as opposed to “talking workshops” are meetings where the participants work together (possibly divided into pairs or small teams) to develop new software code or to conduct computational experiments addressing a particular problem. Working workshops are a cross between scientific workshops and coding sprints or hackathons common among open-source software communities.

26 participants from 11 different organizations gathered at Rice University at the end of July and beginning of August for the Second Madagascar Working Workshop, hosted by The Rice Inversion Project. The topic of the workshop was parallel high-performance computing. The participants divided into teams of 2-3 people by pairing experienced Madagascar developers with novice users. Each team worked on a small project, creating examples of parallel computing or improving general-purpose tools such as sfmpi, sfomp, and (newly created) sfbatch.

The participants used Stampede, the world’s seventh most powerful supercomputer, provided by the Texas Advanced Computing Center, for their computational experiments.