Equipped with the general concepts of time-warping and the choice of 3D GMA described in the previous sections, we propose to solve this moveout inversion problem with a global Monte Carlo inversion method. This choice is encouraged by the non-uniqueness of the solution of this inverse problem, which relies only on traveltime (kinematic) data, and by the fact that the 3D GMA depends non-linearly on the associated moveout parameters. By employing a global optimization scheme within a Bayesian framework, we can map out the non-uniqueness of possible solutions and present them as posterior probability distributions, while also honoring the non-linearity of the problem. Here, we largely follow the notation of Mosegaard and Tarantola (1995) and Tarantola (2005), where we express the posterior probability density function as

σ(m) = k ρ(m) L(m),
where ρ(m) denotes the prior probability density function of the model parameters m, and L(m) represents the likelihood function (with an appropriate normalization constant k) that measures the degree of fit between predicted data and observed data. Assuming a prior ρ(m) that we can draw samples from, the goal is to design a selection process based on L(m) that will result in samples whose density directly represents σ(m). Mosegaard and Tarantola (1995) showed that, given samples of ρ(m) and a choice of L(m), the selection process can be done by means of the Metropolis-Hastings algorithm. In this study, we define L(m) as

L(m) ∝ exp(−S(m) / (2σ_d²)),
where, for each model m, the misfit S(m) between the observed and modeled traveltimes squared for all traces in the considered CMP gather is evaluated and summed according to equation 7. Here, σ_d represents a `hyperparameter' related to the data uncertainty, which we choose to express as σ_d = (λ/100) · RMS(t²), with λ denoting the magnitude of uncertainty expressed in percent with respect to the root-mean-square (RMS) of the data t². Because RMS(t²) is constant for each CMP event, λ can also be considered a relative uncertainty in the estimated reflection traveltime squared (equation 2). We emphasize that the data uncertainty is generally not easily quantified in practice due to the workflow used to estimate t². To account for this complication, we propose to utilize the transdimensional (hierarchical Bayesian) inversion framework (Malinverno and Briggs, 2004; Sambridge et al., 2006; Sambridge et al., 2013), which permits us to treat λ as yet another parameter to be estimated during the inversion. In this way, we obviate the need to estimate the data uncertainty explicitly, while correctly honoring its effects in our inversion. Our implementation of the Monte Carlo inversion with the Metropolis-Hastings rule can, therefore, be summarized as follows:
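As an illustration of this selection rule, a minimal Python sketch is given below. It is not the implementation used in this study: the 3D GMA forward model is replaced by a toy hyperbolic t²–x² moveout, and the two-parameter model vector, proposal step sizes, and flat prior bounds are all illustrative assumptions. The percent uncertainty λ is sampled alongside the moveout parameters, in the hierarchical Bayesian spirit described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(m, offsets):
    # Toy stand-in for the 3D GMA moveout: t^2(x) = t0^2 + x^2 / v^2.
    t0_sq, slowness_sq = m
    return t0_sq + slowness_sq * offsets**2

def log_likelihood(m, lam, offsets, d_obs):
    # Gaussian likelihood with sigma_d = (lam / 100) * RMS(d_obs), so the
    # hyperparameter lam is a relative (percent) data uncertainty.  The
    # -n*log(sigma_d) normalization term is what keeps the hierarchical
    # sampling of lam from drifting toward arbitrarily large uncertainties.
    sigma_d = (lam / 100.0) * np.sqrt(np.mean(d_obs**2))
    resid = d_obs - forward(m, offsets)
    return -0.5 * np.sum(resid**2) / sigma_d**2 - d_obs.size * np.log(sigma_d)

def metropolis(offsets, d_obs, n_iter=20000, step=0.05):
    m = np.array([1.0, 0.2])   # starting model: (t0^2, 1/v^2)
    lam = 5.0                  # starting percent uncertainty
    logl = log_likelihood(m, lam, offsets, d_obs)
    samples = []
    for _ in range(n_iter):
        # Random-walk proposal; with a flat prior inside the bounds, the
        # Metropolis-Hastings acceptance ratio reduces to L(m')/L(m).
        m_prop = m + step * rng.standard_normal(2)
        lam_prop = lam + 10.0 * step * rng.standard_normal()
        if m_prop.min() > 0.0 and lam_prop > 0.0:   # flat prior bounds
            logl_prop = log_likelihood(m_prop, lam_prop, offsets, d_obs)
            if np.log(rng.random()) < logl_prop - logl:
                m, lam, logl = m_prop, lam_prop, logl_prop
        samples.append((m[0], m[1], lam))   # record state (repeated if rejected)
    return np.array(samples)
```

Recording the current state after every proposal (accepted or not) is what makes the sample density proportional to the posterior rather than to the acceptance pattern.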
In our workflow, we propose to perform moveout parameter estimation with two inversion runs:
After the two runs, the final recorded model samples can be visualized as histograms that represent the 1D marginal posterior probability distributions of the estimated moveout parameters. The corresponding posterior probability density functions can subsequently be obtained by appropriate normalization of the histograms. In order to quantify the `gain of information' from the moveout inversion process, we use the Kullback-Leibler divergence given by
D_KL(σ‖ρ) = ∫ σ(m) ln[σ(m)/ρ(m)] dm.    (9)
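In practice, this divergence can be approximated directly from the recorded samples by comparing normalized histograms of prior and posterior draws of a single moveout parameter. The sketch below is one simple way to do this (the bin count and the handling of empty bins are illustrative choices, not prescribed by the text):

```python
import numpy as np

def kl_from_samples(posterior_samples, prior_samples, n_bins=50):
    """Approximate D_KL(posterior || prior) for one parameter from its 1D
    marginal, using normalized histograms on a shared set of bins."""
    lo = min(posterior_samples.min(), prior_samples.min())
    hi = max(posterior_samples.max(), prior_samples.max())
    edges = np.linspace(lo, hi, n_bins + 1)
    width = edges[1] - edges[0]
    post, _ = np.histogram(posterior_samples, bins=edges, density=True)
    prior, _ = np.histogram(prior_samples, bins=edges, density=True)
    # Sum only over bins where both densities are positive; a bin with
    # posterior mass but no prior mass would make the divergence infinite.
    mask = (post > 0) & (prior > 0)
    return np.sum(post[mask] * np.log(post[mask] / prior[mask])) * width
```

The estimate sharpens as the number of samples grows and the binning refines; coarse bins systematically underestimate the true divergence.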
Even though D_KL is generally unbounded, it is possible to compute a benchmark value for some simple cases. For example, if we consider Gaussian prior and posterior distributions, the Kullback-Leibler divergence (D_KL) can be expressed as
D_KL = ln(s_ρ/s_σ) + [s_σ² + (μ_σ − μ_ρ)²] / (2 s_ρ²) − 1/2,    (10)

where μ_σ and s_σ denote the mean and standard deviation of the posterior, and μ_ρ and s_ρ those of the prior. When the prior and posterior share the same mean, equation 10 reduces to

D_KL = ln(s_ρ/s_σ) + s_σ² / (2 s_ρ²) − 1/2.    (11)
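The closed form for two Gaussians can be sanity-checked against brute-force numerical integration of the KL integral. In the sketch below, the example means and standard deviations are arbitrary values chosen for illustration:

```python
import numpy as np

def kl_gaussians(mu_post, s_post, mu_prior, s_prior):
    # Closed-form D_KL(posterior || prior) for two 1D Gaussians.
    return (np.log(s_prior / s_post)
            + (s_post**2 + (mu_post - mu_prior)**2) / (2.0 * s_prior**2)
            - 0.5)

def kl_numerical(mu_post, s_post, mu_prior, s_prior, n=200001):
    # Trapezoidal evaluation of the integral of p*ln(p/q) on a wide grid
    # (10 posterior standard deviations on each side of the posterior mean).
    m = np.linspace(mu_post - 10.0 * s_post, mu_post + 10.0 * s_post, n)
    p = np.exp(-0.5 * ((m - mu_post) / s_post)**2) / (s_post * np.sqrt(2.0 * np.pi))
    q = np.exp(-0.5 * ((m - mu_prior) / s_prior)**2) / (s_prior * np.sqrt(2.0 * np.pi))
    f = p * np.log(p / q)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(m)))
```

When the posterior and prior coincide, both routines return zero, consistent with D_KL measuring the information gained over the prior.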