Shaping regularization

Fomel (2007, 2009) introduced shaping regularization for inversion problems; it regularizes an under-determined linear system by mapping the model to the space of acceptable models. Consider a linear system $\mathbf{Fx} = \mathbf{b}$, where $\mathbf{F}$ is the forward-modeling operator, $\mathbf{x}$ is the model vector, and $\mathbf{b}$ is the data vector. The Tikhonov regularization method amounts to minimizing the least-squares objective below (Tikhonov, 1963):

$\displaystyle \min_{\mathbf{x}} \Vert\mathbf{Fx}-\mathbf{b}\Vert^2+\epsilon^2\Vert\mathbf{Dx}\Vert^2,$ (28)

where $\mathbf{D}$ is the regularization operator, and $\epsilon$ is a scalar parameter. The solution for equation 28 is:

$\displaystyle \mathbf{\hat{x}} = \left(\mathbf{F}^T\mathbf{F}+\epsilon^2 \mathbf{D}^T\mathbf{D}\right)^{-1}\mathbf{F}^T\mathbf{b},$ (29)

where $\mathbf{\hat{x}}$ is the least-squares estimate of $\mathbf{x}$ and $\mathbf{F}^T$ is the adjoint operator. If the forward operator $\mathbf{F}$ is simply the identity operator, the solution of equation 29 takes the form:

$\displaystyle \mathbf{\hat{x}} = \left(\mathbf{I}+\epsilon^2 \mathbf{D}^T\mathbf{D}\right)^{-1}\mathbf{b},$ (30)

which can be viewed as a smoothing process. If we let:

$\displaystyle \mathbf{S} = \left(\mathbf{I}+\epsilon^2 \mathbf{D}^T\mathbf{D}\right)^{-1},$ (31)


then

$\displaystyle \epsilon^2 \mathbf{D}^T\mathbf{D} = \mathbf{S}^{-1} - \mathbf{I}.$ (32)
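As a quick numerical sketch (the first-difference roughener $\mathbf{D}$ and the value of $\epsilon$ are assumptions for illustration), the identity in equation 32 holds by construction, and $\mathbf{S}$ indeed acts as a smoother:

```python
import numpy as np

# Illustrative sketch: D is an assumed first-difference (roughening)
# operator; eps is an arbitrary regularization weight.
n = 50
eps = 2.0
D = np.eye(n) - np.eye(n, k=1)                      # first differences
S = np.linalg.inv(np.eye(n) + eps**2 * D.T @ D)     # equation 31

# Equation 32: eps^2 D^T D = S^{-1} - I
print(np.allclose(eps**2 * D.T @ D, np.linalg.inv(S) - np.eye(n)))

# Applying S to a spike spreads and attenuates it (smoothing)
x = np.zeros(n)
x[n // 2] = 1.0
y = S @ x
print(y.max() < x.max())
```

Both checks print `True`: the algebraic identity is exact, and the smoothed spike has a smaller peak than the input.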

Substituting equation 32 into equation 29 yields the shaping-regularization solution:

$\displaystyle \mathbf{\hat{x}} = \left(\mathbf{F}^T\mathbf{F}+\mathbf{S}^{-1}-\mathbf{I}\right)^{-1}\mathbf{F}^T\mathbf{b} = \left[\mathbf{I} + \mathbf{S}\left(\mathbf{F}^T\mathbf{F}-\mathbf{I}\right)\right]^{-1}\mathbf{SF}^T\mathbf{b}.$ (33)
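The equivalence between the Tikhonov form (equation 29) and the shaping form (equation 33) can be verified numerically. In this sketch $\mathbf{F}$, $\mathbf{b}$, $\mathbf{D}$, and $\epsilon$ are made-up test quantities, not values from the text:

```python
import numpy as np

# Made-up under-determined test problem (assumed for illustration only)
rng = np.random.default_rng(0)
n, m = 6, 10
F = rng.standard_normal((n, m))          # forward operator
b = rng.standard_normal(n)               # data vector
eps = 0.5
D = np.eye(m) - np.eye(m, k=1)           # first-difference roughener
I = np.eye(m)

S = np.linalg.inv(I + eps**2 * D.T @ D)  # equation 31

# Equation 29: Tikhonov solution
x_tik = np.linalg.solve(F.T @ F + eps**2 * D.T @ D, F.T @ b)

# Equation 33: shaping form of the same solution
x_shp = np.linalg.solve(I + S @ (F.T @ F - I), S @ F.T @ b)

print(np.allclose(x_tik, x_shp))
```

The two solutions agree to machine precision, since $\mathbf{I}+\mathbf{S}(\mathbf{F}^T\mathbf{F}-\mathbf{I}) = \mathbf{S}(\mathbf{F}^T\mathbf{F}+\epsilon^2\mathbf{D}^T\mathbf{D})$.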

The forward operator $\mathbf{F}$ may have physical units that require scaling. Introducing a scaling parameter $\lambda$ into $\mathbf{F}$, equation 33 can be written as:

$\displaystyle \mathbf{\hat{x}} = \left[\lambda^2\mathbf{I} + \mathbf{S}\left(\mathbf{F}^T\mathbf{F}-\lambda^2\mathbf{I}\right)\right]^{-1}\mathbf{SF}^T\mathbf{b}.$ (34)

If $\mathbf{S}=\mathbf{HH}^T$ for a square, invertible $\mathbf{H}$, equation 34 can be written as:

$\displaystyle \mathbf{\hat{x}} = \mathbf{H}\left[\lambda^2\mathbf{I} + \mathbf{H}^T\left(\mathbf{F}^T\mathbf{F}-\lambda^2\mathbf{I}\right)\mathbf{H}\right]^{-1}\mathbf{H}^T\mathbf{F}^T\mathbf{b}.$ (35)

The conjugate-gradient algorithm can be used to solve equation 35.
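A minimal sketch of that solution, with a textbook conjugate-gradient loop and an assumed setup: $\mathbf{F}$ and $\mathbf{b}$ are made-up test quantities, and $\mathbf{H}$ is taken as a symmetric square root of the smoother $\mathbf{S}=(\mathbf{I}+\epsilon^2\mathbf{D}^T\mathbf{D})^{-1}$, so that $\mathbf{S}=\mathbf{HH}^T$ and the bracketed matrix in equation 35 is symmetric positive definite:

```python
import numpy as np

# Made-up test problem (illustration only)
rng = np.random.default_rng(1)
n, m = 12, 20
F = rng.standard_normal((n, m))
b = rng.standard_normal(n)
lam, eps = 1.0, 1.0

D = np.eye(m) - np.eye(m, k=1)                     # first-difference roughener
S = np.linalg.inv(np.eye(m) + eps**2 * D.T @ D)    # smoother, equation 31
w, V = np.linalg.eigh(S)                           # S is symmetric positive definite
H = V @ np.diag(np.sqrt(w)) @ V.T                  # symmetric, so S = H H^T

# Bracketed matrix and right-hand side of equation 35
G = lam**2 * np.eye(m) + H.T @ (F.T @ F - lam**2 * np.eye(m)) @ H
rhs = H.T @ F.T @ b

def conjugate_gradient(A, y, tol=1e-10, maxiter=500):
    """Textbook CG for a symmetric positive-definite matrix A."""
    x = np.zeros_like(y)
    r = y - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

x_hat = H @ conjugate_gradient(G, rhs)             # equation 35

# Sanity check against a direct dense solve
x_direct = H @ np.linalg.solve(G, rhs)
print(np.allclose(x_hat, x_direct, atol=1e-6))
```

In practice $\mathbf{H}$ would be applied as an operator (e.g., triangle smoothing) rather than formed as a dense matrix; the eigendecomposition here is only a convenient way to build a small symmetric square root for the demonstration.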