
Optimization

For inversion we adopt a conjugate-gradient scheme (Fomel et al., 2007):

$\displaystyle \mathbf{m}_d^{i} \leftarrow \mathbf{H}_{\epsilon,N,K}\,\mathbf{T}_{\lambda}\!\left[\mathbf{m}_d^{j}\right], \qquad \mathbf{m}_d^{j+1} = \mathbf{m}_d^{j} + \alpha_{j}\mathbf{s}_{j}, \qquad \mathbf{s}_{j} = -\nabla J(\mathbf{m}_{d}^{j}) + \beta_{j}\mathbf{s}_{j-1}$ (5)

where $\mathbf{H}_{\epsilon,N,K}$ and $\mathbf{T}_{\lambda}$ are the anisotropic-smoothing and thresholding operators, $-\nabla J(\mathbf{m}_{d}^{j})$ is the gradient at iteration $j$, $\mathbf{s}_{j}$ is a conjugate direction, $\alpha_{j}$ is an update step length, and $\beta_{j}$ is chosen to guarantee that $\mathbf{s}_{j}$ and $\mathbf{s}_{j-1}$ are conjugate. After several inner iterations $j$ of the conjugate-gradient algorithm we obtain $\mathbf{m}_{d}^{j}$, to which we apply thresholding to discard samples with values below the threshold $\lambda$, which we attribute to noise, and which we then smooth along edges by applying the anisotropic-smoothing operator $\mathbf{H}_{\epsilon,N,K}$. Outer model-shaping iterations are denoted by $i$.
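To make the loop structure concrete, the following is a minimal NumPy sketch of equation (5), not the authors' implementation: `grad_J`, `T_lam`, `H`, and the fixed step `alpha` are hypothetical placeholders, and the Fletcher-Reeves weight is one standard choice for $\beta_{j}$ (in practice $\alpha_{j}$ would come from a line search and $\mathbf{H}_{\epsilon,N,K}$ would be a genuine anisotropic-smoothing operator).

```python
import numpy as np

def shaped_cg(grad_J, T_lam, H, m0, n_outer=10, n_inner=5, alpha=1e-3):
    """Sketch of Eq. (5): inner conjugate-gradient iterations j, followed
    by shaping (thresholding, then smoothing) at each outer iteration i.
    grad_J, T_lam, H, and alpha are assumed callables/values, not
    quantities from the paper."""
    m = m0.copy()
    for i in range(n_outer):               # outer model-shaping iterations i
        s_prev = np.zeros_like(m)
        g_prev = None
        for j in range(n_inner):           # inner CG iterations j
            g = grad_J(m)                  # gradient of the objective J
            # Fletcher-Reeves weight keeps s_j conjugate to s_{j-1};
            # restart (beta = 0) right after each shaping step.
            beta = 0.0 if g_prev is None else np.vdot(g, g) / np.vdot(g_prev, g_prev)
            s = -g + beta * s_prev         # conjugate direction s_j
            m = m + alpha * s              # fixed step in place of a line search
            g_prev, s_prev = g, s
        m = H(T_lam(m))                    # shaping: threshold, then smooth
    return m
```

Restarting the conjugate directions after each shaping step is a deliberate choice here: shaping changes the model outside the CG recursion, so carrying $\mathbf{s}_{j-1}$ across outer iterations would break conjugacy.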

Inversion results also depend on the numbers of inner and outer iterations: their ratio determines how often shaping regularization is applied and therefore controls its strength. Regularization by early stopping can also be applied. With $\mathbf{H}_{\epsilon,N,K}$ removed, the optimization strategy reduces to the iterative thresholding approach (Daubechies et al., 2004).
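For reference, here is the special case with $\mathbf{H}_{\epsilon,N,K}$ removed and $\mathbf{T}_{\lambda}$ taken as soft thresholding, sketched for a linear forward operator; this is a generic illustration of iterative thresholding, with `A`, `d`, `lam`, and the step size being assumed names rather than quantities from the paper.

```python
import numpy as np

def soft_threshold(m, lam):
    # T_lambda: shrink toward zero and zero out samples with |m| <= lam
    return np.sign(m) * np.maximum(np.abs(m) - lam, 0.0)

def iterative_thresholding(A, d, lam, m0, n_iter=100):
    """Iterative soft thresholding (Daubechies et al., 2004) for the
    misfit J(m) = 0.5 ||A m - d||^2 with an l1 penalty weighted by lam.
    All arguments are illustrative, not taken from the paper."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / ||A||^2 ensures convergence
    m = m0.copy()
    for _ in range(n_iter):
        g = A.T @ (A @ m - d)               # gradient of the data misfit
        m = soft_threshold(m - step * g, lam * step)
    return m
```

Here the number of iterations again acts as a regularization parameter, consistent with the early-stopping remark above.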

