Recall the fitting goals (10), with the weights absorbed into the operator $\bold F$ and the data $\bold d$:
\begin{displaymath}
\begin{array}{llllllcl}
\bold 0 &\approx& \bold r_d &=& \bold F \bold m - \bold d &=& \bold F \bold A^{-1} & \bold p \;-\; \bold d \\
\bold 0 &\approx& \bold r_m &=& \bold A \bold m &=& \bold I & \bold p
\end{array}
\end{displaymath}
(17)
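As a minimal numerical sketch of goals (17) — all operators and sizes here are random toy placeholders, not from the text — we can check that when $\bold m = \bold A^{-1} \bold p$, the regularization residual $\bold r_m = \bold A \bold m$ is exactly the preconditioned variable $\bold p$:

```python
import numpy as np

# Hypothetical toy problem: random modeling operator F, data d,
# and an invertible lower-triangular "roughening" operator A.
rng = np.random.default_rng(0)
n, nd = 5, 8
F = rng.standard_normal((nd, n))        # modeling operator (weights absorbed)
d = rng.standard_normal(nd)             # data (weights absorbed)
A = np.eye(n) - 0.5 * np.eye(n, k=-1)   # invertible regularization operator

p = rng.standard_normal(n)              # preconditioned variable
m = np.linalg.solve(A, p)               # m = A^{-1} p

r_d = F @ m - d                         # data residual:  r_d = F m - d
r_m = A @ m                             # model residual: r_m = A m = I p
assert np.allclose(r_m, p)              # the model residual is p itself
```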
Without preconditioning, we have the search direction:
\begin{displaymath}
\Delta \bold m_{\rm bad} \quad =\quad
\left[ \begin{array}{cc} \bold F\T & \bold A\T \end{array} \right]
\left[ \begin{array}{c} \bold r_d \\ \bold r_m \end{array} \right]
\end{displaymath}
(18)
and with preconditioning, we have the search direction:
\begin{displaymath}
\Delta \bold p_{\rm good} \quad =\quad
\left[ \begin{array}{cc} (\bold F \bold A^{-1})\T & \bold I \end{array} \right]
\left[ \begin{array}{c} \bold r_d \\ \bold r_m \end{array} \right]
\end{displaymath}
(19)
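The two search directions (18) and (19) can be computed side by side. In this sketch (same hypothetical random operators as above, not from the text), note that $\Delta \bold p_{\rm good}$ expands to $(\bold A^{-1})\T \bold F\T \bold r_d + \bold r_m$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, nd = 5, 8
F = rng.standard_normal((nd, n))        # toy modeling operator
d = rng.standard_normal(nd)             # toy data
A = np.eye(n) - 0.5 * np.eye(n, k=-1)   # toy invertible regularizer
Ainv = np.linalg.inv(A)

p = rng.standard_normal(n)
m = Ainv @ p
r_d = F @ m - d
r_m = A @ m                             # equals p

# (18): gradient with respect to m
dm_bad = F.T @ r_d + A.T @ r_m
# (19): gradient with respect to p
dp_good = (F @ Ainv).T @ r_d + r_m
assert np.allclose(dp_good, Ainv.T @ (F.T @ r_d) + p)
```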
The essential feature of preconditioning is not that we perform the iterative optimization in terms of the variable $\bold p$. The essential feature is that we use a search direction that is a gradient with respect to $\bold p$, not $\bold m$. Using $\bold m = \bold A^{-1} \bold p$, we have $\Delta \bold m = \bold A^{-1} \Delta \bold p$, which enables us to define a good search direction in $\bold m$ space.
\begin{displaymath}
\Delta \bold m_{\rm good} \quad =\quad \bold A^{-1} \Delta \bold p_{\rm good}
\quad =\quad \bold A^{-1} (\bold A^{-1})\T \bold F\T \bold r_d + \bold A^{-1} \bold r_m
\end{displaymath}
(20)
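Continuing the same hypothetical toy setup, we can verify (20) numerically: mapping $\Delta \bold p_{\rm good}$ back with $\bold A^{-1}$ gives the same vector as the expanded right-hand side:

```python
import numpy as np

rng = np.random.default_rng(0)
n, nd = 5, 8
F = rng.standard_normal((nd, n))        # toy modeling operator
d = rng.standard_normal(nd)             # toy data
A = np.eye(n) - 0.5 * np.eye(n, k=-1)   # toy invertible regularizer
Ainv = np.linalg.inv(A)

p = rng.standard_normal(n)
m = Ainv @ p
r_d = F @ m - d
r_m = A @ m

dp_good = (F @ Ainv).T @ r_d + r_m      # equation (19)
dm_good = Ainv @ dp_good                # equation (20), left form

# equation (20), expanded right-hand side
rhs = Ainv @ Ainv.T @ F.T @ r_d + Ainv @ r_m
assert np.allclose(dm_good, rhs)
```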
Define the gradient by $\bold g = \bold F\T \bold r_d$, and notice that $\bold r_m = \bold p$.
\begin{displaymath}
\Delta \bold m_{\rm good} \quad =\quad \bold A^{-1} (\bold A^{-1})\T \bold g + \bold m
\end{displaymath}
(21)
The search direction (21)
shows a positive-definite operator scaling the gradient.
The components of a gradient vector are mutually independent; each independently points (negatively) along a direction of descent, and each can obviously be scaled by any positive number. Now we have found that we can also scale a gradient vector by a positive-definite matrix and still expect the conjugate-direction algorithm to descend, as always, to the ``exact'' answer in a finite number of steps.
The reason is that modifying the search direction with $\bold A^{-1} (\bold A^{-1})\T$ is equivalent to solving a conjugate-gradient problem in $\bold p$.
We will see in a later chapter that our specifying $\bold A$ amounts to specifying a prior expectation of the spectrum of the model $\bold m$.
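As an illustration of this spectral interpretation (a hedged sketch under assumptions not in the text): if $\bold A$ is taken as a causal differencing operator, then $\bold m = \bold A^{-1} \bold p$ integrates a white $\bold p$, so the model is expected to have a red (low-frequency-dominated) spectrum shaped by $\bold A^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512
p = rng.standard_normal(n)     # white preconditioned variable
m = np.cumsum(p)               # m = A^{-1} p when A is causal differencing

spec = np.abs(np.fft.rfft(m))**2
low = spec[1:n // 8].mean()    # low-frequency energy (skipping DC)
high = spec[-n // 8:].mean()   # high-frequency energy
assert low > high              # integration reddens the model spectrum
```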