Month: March 2016

Program of the month: sflinear

March 23, 2016 Programs

sflinear performs 1-D linear interpolation of irregularly spaced data.

An example from rsf/su/rsflab4 produces a linearly interpolated velocity profile.

The input to sflinear contains coordinate-value pairs arranged so that the second dimension of the data is n2=2. The output contains regularly sampled values on the specified grid.

If the input coordinates are not in order and need sorting, use sort=y.

The output grid can be specified either by supplying a pattern file with pattern= or by giving the usual parameters n1=, o1=, and d1=.
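
As a rough sketch, a basic run might look like the following, where pairs.rsf, grid.rsf, profile.rsf, and the grid values are made-up names chosen for illustration:

  # interpolate coordinate-value pairs onto a regular 1000-point grid,
  # sorting the input coordinates first
  sflinear < pairs.rsf sort=y n1=1000 o1=0 d1=0.004 > profile.rsf

  # alternatively, borrow the output grid from an existing file
  sflinear < pairs.rsf sort=y pattern=grid.rsf > profile.rsf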

If the number of iterations specified by niter= is greater than zero, sflinear switches from simple linear interpolation to iterative interpolation by shaping regularization, which can produce a smooth output. The additional parameters to control this process are nw= (size of the local Lagrange interpolation filter for forward interpolation) and rect= (smoothing radius for shaping).
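
Under the same assumptions, switching to the iterative scheme only requires the extra parameters; the values below are arbitrary illustrations rather than recommendations:

  # iterative interpolation by shaping regularization:
  # 100 iterations, a 3-point local interpolation filter,
  # and a 10-sample smoothing radius
  sflinear < pairs.rsf sort=y niter=100 nw=3 rect=10 n1=1000 o1=0 d1=0.004 > smooth.rsf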

Tutorial on semblance, coherence, and other discontinuity attributes

March 23, 2016 Examples

The example in rsf/tutorials/semblance reproduces the tutorial from Joe Kington on semblance, coherence, and other discontinuity attributes. The tutorial was published in the December 2015 issue of The Leading Edge.

Madagascar users are encouraged to try improving the results.

Multiple realizations

March 17, 2016 Documentation

Another old paper has been added to the collection of reproducible documents: Multiple realizations using standard inversion techniques

When solving a missing data problem, geophysicists and geostatisticians have very similar strategies. Each uses the known data to characterize the model’s covariance. At SEP we often characterize the covariance through Prediction Error Filters (PEFs) (Claerbout, 1998). Geostatisticians build variograms from the known data to represent the model’s covariance (Isaaks and Srivastava, 1989). Once each has some measure of the model covariance, they attempt to fill in the missing data. Here their goals slightly diverge. The geophysicist solves a global estimation problem and attempts to create a model whose covariance is equivalent to the covariance of the known data. The geostatistician performs kriging, solving a series of local estimation problems. Each model estimate is the linear combination of nearby data points that best fits the predetermined covariance estimate. Both of these approaches are in some ways exactly what we want: given a problem, give me “the answer”…

Vplot figures and MS Word

March 16, 2016 Systems

Joe Dellinger, the author of Vplot, suggests adjusting parameters for raster figures when including them in Word documents. He writes:

Wow, working on my SEG abstract I had a helluva time getting my Vplot raster figures to look decent in Word. Then I realized… wait a minute, it’s doing just the bad things plotters back in the ’80s were doing. I fiddled a little with pixc and greyc, and voilà! Beautiful raster figures.

From the Vplot documentation:

  • pixc is used only when dithering is being performed, and also should only be used for hardcopy devices. It alters the grey scale to correct for pixel overlap on the device, which (if uncorrected) causes grey raster images to come out much darker on paper than on graphics displays.

  • greyc is used only when dithering is being performed, and really should only be used for hardcopy devices. It alters the grey scale so that grey rasters come out on paper with the same nonlinear appearance that is perceived on display devices.

The default values are pixc=1 greyc=1. The values used by Joe in his Word document were pixc=1.15 greyc=1.25.

To convert Vplot plots to other forms of graphics, you can use vpconvert.
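
For instance, a hypothetical conversion to PNG that applies Joe’s values might look like this, assuming vpconvert passes pen options such as pixc= and greyc= through to the underlying pen program:

  # render a Vplot file to PNG with corrected grey scales
  vpconvert format=png pixc=1.15 greyc=1.25 figure.vpl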

National academies and reproducible research

March 14, 2016 Links

A high-profile workshop, Statistical Challenges in Assessing and Fostering the Reproducibility of Scientific Results, organized by the National Academies of Sciences and the National Science Foundation, took place in Washington, DC, last year. The workshop summary report was recently published by the National Academies Press.

Here is an extract, which lists recommendations from the panel discussion:

  • Establish publication requirements for open data and code. Journal editors and referees should confirm that data and code are linked and accessible before a paper is published. (Keith Baggerly)
  • Clarify strength of evidence for findings. The strength of evidence should be clearly stated for theories and results (in publications, press releases, etc.) to ensure that initial explorations are not misrepresented as being more conclusive than they actually are. (Keith Baggerly)
  • Align incentives. Communities need to examine how to build a culture that rewards researchers who put effort into verifying their own results rather than rushing to publication. (Marcia McNutt)
  • Improve training.
    • Institutions need to make extra efforts to instill students with an ethos of care and reproducibility. (Marcia McNutt)
    • Universities need to change the curriculum to incorporate topics such as version control, code review, and general data management, and communities need to revise their incentives to improve the chances of reproducible, trustworthy research in the future. Steps to improve the future workforce are necessary to maintain public trust in science. (Randy LeVeque)
    • Many graduates are well steeped in open-source software norms and ethics, and they are used to this as a normal way of operating. However, they come into a scientific research setting where codes are not shared, transparent, or open; instead, codes are built in a way that feels haphazard to them. This training disconnect can interfere with mentorship and with their continuation in science. A better understanding of these norms is needed at all levels of research. (Victoria Stodden)
    • Prevention and motivation need to be components of instilling the proper ethos. This could be part of National Institutes of Health (NIH)-mandated ethics courses. (Keith Baggerly)
  • Clarify terminology. A clearer set of terms is needed, especially for teaching students and creating guidelines and best practices. Some examples of how to do this can be found within the uncertainty quantification community, which successfully clarified the terms verification and validation after they had been used almost synonymously 10-15 years ago. (Ronald Boisvert)

The authors of these recommendations are: