Model fitting by least squares

Let g be the gradient vector and s the previous descent step, and let capital letters denote vectors mapped into data space by F:

x_new = x + α g + β s        (66)

G = F g        (67)

S = F s        (68)

A linear combination in solution space, say α g + β s, corresponds to α G + β S in the conjugate space, the data space, because F(α g + β s) = α F g + β F s = α G + β S. According to equation (51), the residual is the modeled data minus the observed data.

r = F x − d        (69)

r_new = F x_new − d = r + α G + β S        (70)
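The correspondence between the two spaces is just the linearity of F. A minimal numerical sketch (the 3 × 2 matrix F and the vectors g and s below are made-up illustrations, not from the text) checks that F(α g + β s) = α G + β S:

```python
def matvec(F, v):
    """Multiply matrix F (a list of rows) by vector v."""
    return [sum(a * b for a, b in zip(row, v)) for row in F]

F = [[1.0, 2.0], [0.0, 1.0], [3.0, -1.0]]   # made-up operator
g = [1.0, -2.0]                              # made-up gradient direction
s = [0.5, 4.0]                               # made-up previous step
alpha, beta = 0.3, -1.2                      # arbitrary step sizes

G = matvec(F, g)                             # data-space image of g
S = matvec(F, s)                             # data-space image of s

# Combine in solution space, then map to data space ...
lhs = matvec(F, [alpha * gi + beta * si for gi, si in zip(g, s)])
# ... versus map first, then combine in data space.
rhs = [alpha * Gi + beta * Si for Gi, Si in zip(G, S)]
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```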

The *gradient* vector is a vector with the same number
of components as the solution vector x.
A vector with this number of components is:

g = F' r

The gradient in the transformed space is G = F g = F F' r, also known as the *conjugate gradient*.
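As a sketch of the component counts (the matrix F and residual r here are made-up illustrations): the gradient g = F' r lives in solution space, while its image G = F g = F F' r lives in data space:

```python
def matvec(F, v):
    """Multiply matrix F (a list of rows) by vector v."""
    return [sum(a * b for a, b in zip(row, v)) for row in F]

def transpose(F):
    """Transpose a matrix stored as a list of rows."""
    return [list(col) for col in zip(*F)]

F = [[1.0, 2.0], [0.0, 1.0], [3.0, -1.0]]   # made-up 3x2 operator
r = [0.5, -1.0, 2.0]                         # made-up residual (3 components)

g = matvec(transpose(F), r)   # gradient: 2 components, like x
G = matvec(F, g)              # conjugate gradient: 3 components, like r
# g == [6.5, -2.0]; G == [2.5, -2.0, 21.5]
assert len(g) == 2 and len(G) == 3
```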

What is our solution update Δx?
It is some unknown amount α of the gradient g plus
another unknown amount β of the previous step s.
Likewise in residual space.

Δx = α g + β s        (75)

Δr = α G + β S        (76)

The minimization (56) is now generalized
to scan not only along a line with α,
but simultaneously along another line with β.
The combination of the two lines is a plane.
We now set out to find the location in this plane that minimizes the quadratic Q(α, β) = (r + Δr) · (r + Δr).

Setting to zero the derivatives with respect to α and β (and dropping a factor of 2) gives

0 = ∂Q/∂α = G · (r + α G + β S)        (78)

0 = ∂Q/∂β = S · (r + α G + β S)        (79)

In matrix form,

[ (G · G)  (G · S) ; (S · G)  (S · S) ] [ α ; β ] = − [ (G · r) ; (S · r) ]        (80)

whose solution is

[ α ; β ] = −1/((G · G)(S · S) − (G · S)²) [ (S · S)  −(G · S) ; −(G · S)  (G · G) ] [ (G · r) ; (S · r) ]        (81)
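The plane search reduces to a two-by-two solve. A minimal sketch (the residual r and the directions G, S below are made-up illustrations): find α and β, then check that the minimizing residual is orthogonal to both search directions, which is exactly what the zero-derivative conditions say:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def plane_search(r, G, S):
    """Solve the 2x2 system minimizing |r + alpha*G + beta*S|^2."""
    gg, ss, gs = dot(G, G), dot(S, S), dot(G, S)
    gr, sr = dot(G, r), dot(S, r)
    det = gg * ss - gs * gs          # assumes G and S are not parallel
    alpha = -(ss * gr - gs * sr) / det
    beta = -(gg * sr - gs * gr) / det
    return alpha, beta

r = [1.0, -2.0, 0.5]     # made-up residual
G = [1.0, 0.0, 1.0]      # made-up conjugate gradient
S = [0.0, 1.0, -1.0]     # made-up transformed previous step

alpha, beta = plane_search(r, G, S)
r_new = [ri + alpha * Gi + beta * Si for ri, Gi, Si in zip(r, G, S)]
# At the minimum, the new residual is orthogonal to both directions.
assert abs(dot(r_new, G)) < 1e-12 and abs(dot(r_new, S)) < 1e-12
```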

The many applications in this book all need to
find α and β with (81), and then
update the solution with (71) and
update the residual with (72).
Thus, we package these activities in a subroutine
named `cgstep()`.
To use that subroutine, we have a computation **template**
in which the repetitive work is done by subroutine `cgstep()`.
This template (or pseudocode) for minimizing the residual
by the conjugate-direction method is:

r ← F x − d
iterate {
    Δx ← F' r
    Δr ← F Δx
    (x, r) ← cgstep(x, r, Δx, Δr)
}

where the subroutine `cgstep()` remembers the previous step, solves the two-by-two system (81) for α and β, and applies the solution and residual updates.
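The template above can be sketched end-to-end in Python. Everything here is an illustration under assumptions: a made-up 3 × 2 matrix F and data d, with the previous step kept in a small state tuple rather than in the book's actual `cgstep()` internals. Note that Δx = F' r is used as-is; the plane search produces a negative α, so the descent sign takes care of itself:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(F, v):
    return [sum(a * b for a, b in zip(row, v)) for row in F]

def transpose(F):
    return [list(col) for col in zip(*F)]

def cgstep(x, r, dx, dr, state):
    """One conjugate-direction step; `state` holds the previous (s, S)."""
    if state is None:                     # first iteration: line search only
        alpha = -dot(dr, r) / dot(dr, dr)
        beta = 0.0
        s = [alpha * e for e in dx]
        S = [alpha * e for e in dr]
    else:                                 # plane search in (alpha, beta)
        s_prev, S_prev = state
        gg, ss, gs = dot(dr, dr), dot(S_prev, S_prev), dot(dr, S_prev)
        gr, sr = dot(dr, r), dot(S_prev, r)
        det = gg * ss - gs * gs
        alpha = -(ss * gr - gs * sr) / det
        beta = -(gg * sr - gs * gr) / det
        s = [alpha * a + beta * b for a, b in zip(dx, s_prev)]
        S = [alpha * a + beta * b for a, b in zip(dr, S_prev)]
    x = [a + b for a, b in zip(x, s)]     # update the solution
    r = [a + b for a, b in zip(r, S)]     # update the residual
    return x, r, (s, S)

F = [[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]  # made-up operator
d = [1.0, 2.0, 2.0]                       # made-up observed data
x = [0.0, 0.0]
r = [a - b for a, b in zip(matvec(F, x), d)]   # r = Fx - d
state = None
for _ in range(2):                        # 2 unknowns: 2 steps suffice here
    dx = matvec(transpose(F), r)          # gradient
    dr = matvec(F, dx)                    # conjugate gradient
    x, r, state = cgstep(x, r, dx, dr, state)
# For this consistent made-up problem, x converges to [1.0, 1.0].
```

The design point is the one the text makes: the application supplies only F, F', and the data, while all the step-size bookkeeping is isolated in `cgstep()`.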


2014-12-01