From the algorithmic viewpoint of the CG method,
the IRLS algorithm can be considered an LS method,
but with its operator, $\mathbf{L}$, modified by the weights, $\mathbf{W}_r$ and $\mathbf{W}_m$.
The only change in the problems to solve that distinguishes the IRLS algorithm
from the LS one is the substitution of
$\mathbf{W}_r \mathbf{L} \mathbf{W}_m$
and
$\mathbf{W}_r \mathbf{d}$
for $\mathbf{L}$ and $\mathbf{d}$, respectively.
Since the weights $\mathbf{W}_r$ and $\mathbf{W}_m$ are functions of the residual and the model, respectively,
and both the residual and the model change during the iteration,
the problem that the IRLS method solves is nonlinear.
Therefore, the IRLS method obtains the $\ell_1$-norm solution at the cost of a nonlinear implementation.
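For concreteness, the reweighting loop can be sketched as follows. This is a minimal illustration, not the exact published algorithm: it assumes diagonal residual weights $1/\max(|r_i|,\epsilon)$ for the $\ell_1$-norm, omits the model weights, and the stabilization constant `eps` is an assumed implementation choice.

```python
import numpy as np

def irls_l1(L, d, n_outer=30, eps=1e-6):
    """Sketch of IRLS for an l1-norm residual fit of L m = d.

    Each outer iteration re-solves a weighted LS problem whose weights
    depend on the current residual; because the residual changes every
    iteration, the overall scheme is nonlinear even though each inner
    solve is linear.
    """
    # start from the plain LS solution
    m = np.linalg.lstsq(L, d, rcond=None)[0]
    for _ in range(n_outer):
        r = d - L @ m
        # residual weights W_r = diag(1 / max(|r_i|, eps)); eps (assumed)
        # guards against division by a vanishing residual
        w = 1.0 / np.maximum(np.abs(r), eps)
        sw = np.sqrt(w)
        # weighted LS solve: minimize || sqrt(W_r) (d - L m) ||_2
        m = np.linalg.lstsq(sw[:, None] * L, sw * d, rcond=None)[0]
    return m
```

On an overdetermined system containing one gross outlier, this loop drives the solution toward the $\ell_1$-norm fit, which ignores the outlier where plain LS smears it over the whole model.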
I propose another algorithm that obtains the $\ell_1$-norm solution
without breaking the linear inversion template.
Instead of modifying the operator, which results in nonlinear inversion,
we can guide the search
toward the minimum
$\ell_1$-norm solution in a specific model subspace,
so as to obtain a solution that meets a user's specific criteria.
The specific model subspace could be
guided by the gradient of a specific
$\ell_p$-norm
or constrained by an a priori model.
Such guiding of the model vector can be realized by
weighting the residual vector or the gradient vector in the CG algorithm.
Since the weights essentially change the direction of the gradient vector in the CG algorithm,
the proposed algorithm is named the Conjugate Guided Gradient (CGG) method.
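The guided iteration can be sketched as a CGLS-style loop in which only the residual is weighted before the gradient is formed, while the operator itself is left untouched. This is a hypothetical illustration of the idea, not the published algorithm verbatim: the weight cap `delta` and the Fletcher–Reeves-style `beta` update are assumptions made to keep the sketch stable and self-contained.

```python
import numpy as np

def cgg(L, d, n_iter=200, delta=1.0):
    """Sketch of a conjugate-guided-gradient iteration.

    The operator L is never modified; only the residual is weighted
    before the gradient is computed, so every step keeps the linear
    (CG-style) inversion template while the search is guided toward
    an approximately l1-norm residual solution.
    """
    m = np.zeros(L.shape[1])
    r = d.astype(float).copy()
    p = np.zeros_like(m)
    g_dot_old = 0.0
    for _ in range(n_iter):
        # guide: weight the residual by 1/max(|r_i|, delta), a capped
        # version of the l1 weights 1/|r_i| (delta is an assumed choice)
        wr = r / np.maximum(np.abs(r), delta)
        g = L.T @ wr                        # guided gradient
        g_dot = g @ g
        if g_dot == 0.0:                    # guided gradient vanished
            break
        # conjugate-style direction; conjugacy is only approximate once
        # the weights change between iterations
        beta = g_dot / g_dot_old if g_dot_old > 0.0 else 0.0
        p = g + beta * p
        q = L @ p
        alpha = g_dot / (q @ q)             # CG-style step length
        m = m + alpha * p
        r = r - alpha * q
        g_dot_old = g_dot
    return m
```

On the same outlier-contaminated test problem as above, this guided loop recovers a model much closer to the true one than plain LS does, without ever forming a weighted operator or re-solving a modified linear system.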