Optimization strategy

To solve the optimization problems in equations 6, 8, and 9, we employ a conjugate-gradients scheme:

$\displaystyle \begin{bmatrix}\mathbf{m}_{r}^{n+1} \\ \mathbf{m}_{d}^{n+1}\end{bmatrix} = \begin{bmatrix}\mathbf{m}_{r}^{n} \\ \mathbf{m}_{d}^{n}\end{bmatrix} + \alpha_{n}\begin{bmatrix}\mathbf{s}_{r}^{n} \\ \mathbf{s}_{d}^{n}\end{bmatrix}, \qquad \begin{bmatrix}\mathbf{s}_{r}^{n} \\ \mathbf{s}_{d}^{n}\end{bmatrix} = -\nabla J(\mathbf{m}_{r}^{n},\mathbf{m}_{d}^{n}) + \beta_{n}\begin{bmatrix}\mathbf{s}_{r}^{n-1} \\ \mathbf{s}_{d}^{n-1}\end{bmatrix},$ (12)

where $-\nabla J(\mathbf{m}_{r}^{n},\mathbf{m}_{d}^{n})$ is the gradient at iteration $n$; $[\mathbf{s}_{r}^{n},\ \mathbf{s}_{d}^{n}]^T$ is a conjugate direction with two components, one updating each model; the scalar $\alpha_{n}$ is obtained by performing a line search; and $\beta_n$ is designed to guarantee that $[\mathbf{s}_{r}^{n},\ \mathbf{s}_{d}^{n}]^T$ and $[\mathbf{s}_{r}^{n-1},\ \mathbf{s}_{d}^{n-1}]^T$ are conjugate.
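For concreteness, the update in equation 12 can be sketched in Python as follows. This is a minimal illustration, not our implementation: the dense matrix `A` standing in for the forward operator, the quadratic objective, the exact quadratic line search for $\alpha_n$, and the Fletcher-Reeves formula for $\beta_n$ are all assumptions made for the sketch, since the scheme above does not depend on these particular choices.

```python
import numpy as np

# Toy stand-ins (assumptions, not the paper's operators): a dense matrix A
# plays the role of the forward operator acting on the stacked model
# m = [m_r; m_d], with J(m) = 0.5*||A m - d||^2, so grad J = A^T (A m - d).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
d = rng.standard_normal(40)

def grad(m):
    # Gradient of the least-squares objective at the stacked model m
    return A.T @ (A @ m - d)

def line_search(m, s):
    # Exact minimizing step alpha_n along s for a quadratic objective;
    # a generic line search would be used for a nonlinear J
    r = A @ m - d
    As = A @ s
    return -(As @ r) / (As @ As)

m = np.zeros(20)          # stacked model [m_r^init; m_d^init], zeros
s_prev = g_prev = None
for n in range(10):
    g = grad(m)
    if s_prev is None:
        s = -g                                  # n = 0: steepest descent
    else:
        beta = (g @ g) / (g_prev @ g_prev)      # Fletcher-Reeves beta_n
        s = -g + beta * s_prev                  # conjugate direction (eq. 12)
    alpha = line_search(m, s)                   # alpha_n from the line search
    m = m + alpha * s                           # model update (eq. 12)
    s_prev, g_prev = s, g
```

Because the two model components are stacked into a single vector and share one gradient, the scheme is an ordinary nonlinear conjugate-gradients iteration on the joint model.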

It is important to highlight that the same operator is used to update both the reflection image $\mathbf{m}_r$ and the diffraction image $\mathbf{m}_d$. The implication of this strategy is that, without regularization, the same updates are attributed to both models. For simplicity, consider the conjugate direction $\mathbf{s}^{n}$ at the first iteration, $n=0$. It has the form $\mathbf{s}^{n=0}=[\mathbf{L}^T\mathbf{D}_{data}^T\mathbf{P}^T\mathbf{r},\ \mathbf{L}^T\mathbf{D}_{data}^T\mathbf{P}^T\mathbf{r}]^T$ (PWD ($\mathbf{D}_{data}$) and the path-summation integral ($\mathbf{P}$) are disabled for the objective functions in equations 8 and 9, respectively), where $\mathbf{r}$ denotes the residual between the observed data $\mathbf{d}$ and the data modeled from the initial-guess model $[\mathbf{m}_{r}^{init},\ \mathbf{m}_{d}^{init}]^T$, which is initialized with zeros for the first inversion (equation 6) and with the output of the first inversion for the optimization of the objective function in equation 8. The conjugate direction $\mathbf{s}^{n=0}$ is equal to the negative gradient of the objective function, $-\nabla J(\mathbf{m}_{r}^{init},\mathbf{m}_{d}^{init})$. The residual $\mathbf{r}$ in the conjugate direction $\mathbf{s}^{n=0}$ is thus mapped to the reflection and diffraction images by the same operator: the adjoint of the “chain”. The same holds for all subsequent iterations, in which the previous directions $\mathbf{s}^{n-1} = [\mathbf{s}_r^{n-1},\ \mathbf{s}_d^{n-1}]^T$, likewise identical for both models, participate in the update. To separate reflections from diffractions, regularization is therefore required.
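This degeneracy can be checked numerically. In the sketch below, a hypothetical dense matrix `C` stands in for the chain $\mathbf{P}\mathbf{D}_{data}\mathbf{L}$; the only point is that the same adjoint $\mathbf{C}^T$ maps the residual $\mathbf{r}$ into both components of $\mathbf{s}^{n=0}$, so the reflection and diffraction images receive identical first updates.

```python
import numpy as np

# Hypothetical dense stand-in C for the chain P D_data L; the matrix and
# the zero initial models are assumptions for illustration only.
rng = np.random.default_rng(1)
C = rng.standard_normal((30, 10))
d_obs = rng.standard_normal(30)

m_r = np.zeros(10)                 # m_r^init (zeros)
m_d = np.zeros(10)                 # m_d^init (zeros)
r = d_obs - C @ (m_r + m_d)        # residual for the initial guess

s0_r = C.T @ r                     # component updating the reflection image
s0_d = C.T @ r                     # component updating the diffraction image
assert np.allclose(s0_r, s0_d)     # identical updates: regularization is
                                   # needed to separate the two images
```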

