Appendix C: Review of local similarity

We begin the review with two 1D vectors $\mathbf{a}$ and $\mathbf{b}$. A common way to measure the similarity between two signals is to calculate the correlation coefficient,

$\displaystyle c=\frac{\mathbf{a}^T\mathbf{b}}{\parallel \mathbf{a} \parallel_2 \parallel \mathbf{b} \parallel_2},$ (C-1)

where $c$ is the correlation coefficient, $\mathbf{a}^T\mathbf{b}$ denotes the dot product of $\mathbf{a}$ and $\mathbf{b}$, and $\parallel \cdot \parallel_2$ denotes the $L_2$ norm of the input vector. The correlation coefficient $c$ measures the correlation, or in other words the similarity, between two vectors in a global manner. In some applications, however, a local measurement of similarity is more desirable. In order to measure the local similarity of two vectors, it is straightforward to apply local windows to the target vectors:

$\displaystyle c(i) = \frac{\mathbf{a}_i^T\mathbf{b}_i}{\parallel \mathbf{a}_i \parallel_2 \parallel \mathbf{b}_i \parallel_2},$ (C-2)

where $\mathbf{a}_i$ and $\mathbf{b}_i$ denote the localized vectors centered on the $i$th entries of the long input vectors $\mathbf{a}$ and $\mathbf{b}$, respectively. The windowing is sometimes troublesome, since the measured similarity depends strongly on the window length, and the measured local similarity might be discontinuous because of the separate calculations in different windows. To avoid the negative effects of local windowing, Fomel (2007) proposed an elegant way to calculate a smooth local similarity by solving two inverse problems.
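As a concrete illustration of equation C-2, the windowed local correlation can be sketched as below. The function name `windowed_similarity`, the `half_window` parameter, and the simple truncation of windows at the vector boundaries are our illustrative choices, not part of the original formulation:

```python
import numpy as np

def windowed_similarity(a, b, half_window=10):
    """Local correlation via sliding windows (equation C-2).

    For each sample i, correlate the sub-vectors of a and b centered
    on the i-th entry. Windows are truncated at the boundaries.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(a)
    c = np.zeros(n)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        ai, bi = a[lo:hi], b[lo:hi]
        denom = np.linalg.norm(ai) * np.linalg.norm(bi)
        c[i] = ai @ bi / denom if denom > 0.0 else 0.0
    return c
```

Note that the output indeed varies with `half_window` and is computed independently per window, which is exactly the window-length dependence and potential discontinuity criticized above.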

Let us first rewrite equation C-1 in a different form. Taking the absolute value of both sides of equation C-1 and using the fact that $\mathbf{a}^T\mathbf{b}=\mathbf{b}^T\mathbf{a}$, we have

$\displaystyle \vert c\vert=\sqrt{\frac{(\mathbf{a}^T\mathbf{b})(\mathbf{b}^T\mathbf{a})}{(\mathbf{a}^T\mathbf{a})(\mathbf{b}^T\mathbf{b})}}=\sqrt{\left\vert\frac{\mathbf{a}^T\mathbf{b}}{\mathbf{b}^T\mathbf{b}}\right\vert}\sqrt{\left\vert\frac{\mathbf{b}^T\mathbf{a}}{\mathbf{a}^T\mathbf{a}}\right\vert}.$ (C-3)

Letting $c_1=\frac{\mathbf{a}^T\mathbf{b}}{\mathbf{b}^T\mathbf{b}}$ and $c_2=\frac{\mathbf{b}^T\mathbf{a}}{\mathbf{a}^T\mathbf{a}}$, equation C-3 becomes

$\displaystyle \vert c\vert=\sqrt{c_1c_2}.$ (C-4)

It is obvious that the scalars $c_1$ and $c_2$ are the solutions of two least-squares inverse problems:

$\displaystyle c_1 = \arg\min_{\tilde{c}} \parallel \mathbf{a}-\mathbf{b}\tilde{c} \parallel_2^2,$ (C-5)
$\displaystyle c_2 = \arg\min_{\tilde{c}} \parallel \mathbf{b}-\mathbf{a}\tilde{c} \parallel_2^2.$ (C-6)
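The identity $\vert c\vert=\sqrt{c_1c_2}$ and the closed-form minimizers of equations C-5 and C-6 can be verified numerically. The snippet below is a sanity check only, using arbitrary random vectors of our choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(100)
b = rng.standard_normal(100)

# Global correlation coefficient (equation C-1).
c = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Setting the derivative of each quadratic objective to zero gives:
c1 = (a @ b) / (b @ b)   # minimizer of ||a - b*c||_2^2  (equation C-5)
c2 = (b @ a) / (a @ a)   # minimizer of ||b - a*c||_2^2  (equation C-6)
```

The product $c_1c_2 = (\mathbf{a}^T\mathbf{b})^2/((\mathbf{a}^T\mathbf{a})(\mathbf{b}^T\mathbf{b}))$ is always non-negative, so the square root in equation C-4 is well defined even when $c_1$ and $c_2$ are individually negative.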

The local similarity attribute is based on equations C-5 and C-6, but in a localized form:

$\displaystyle \mathbf{c}_1$ $\displaystyle =\arg\min_{\tilde{\mathbf{c}}_1}\Arrowvert \mathbf{a}-\mathbf{B}\tilde{\mathbf{c}}_1 \Arrowvert_2^2,$ (C-7)
$\displaystyle \mathbf{c}_2$ $\displaystyle =\arg\min_{\tilde{\mathbf{c}}_2}\Arrowvert \mathbf{b}-\mathbf{A}\tilde{\mathbf{c}}_2 \Arrowvert_2^2,$ (C-8)

where $\mathbf{A}$ is a diagonal operator composed from the elements of $\mathbf{a}$, $\mathbf{A}=\operatorname{diag}(\mathbf{a})$, and $\mathbf{B}$ is a diagonal operator composed from the elements of $\mathbf{b}$, $\mathbf{B}=\operatorname{diag}(\mathbf{b})$. Equations C-7 and C-8 are solved via shaping regularization:

$\displaystyle \mathbf{c}_1$ $\displaystyle = [\lambda_1^2\mathbf{I} + \mathcal{T}(\mathbf{B}^T\mathbf{B}-\lambda_1^2\mathbf{I})]^{-1}\mathcal{T}\mathbf{B}^T\mathbf{a},$ (C-9)
$\displaystyle \mathbf{c}_2$ $\displaystyle = [\lambda_2^2\mathbf{I} + \mathcal{T}(\mathbf{A}^T\mathbf{A}-\lambda_2^2\mathbf{I})]^{-1}\mathcal{T}\mathbf{A}^T\mathbf{b},$ (C-10)

where $\mathcal{T}$ is a smoothing operator, and $\lambda_1$ and $\lambda_2$ are two parameters that control the physical dimensionality and enable fast convergence when the inversion is implemented iteratively. These two parameters can be chosen as $\lambda_1 = \Arrowvert\mathbf{B}^T\mathbf{B}\Arrowvert_2$ and $\lambda_2 = \Arrowvert\mathbf{A}^T\mathbf{A}\Arrowvert_2$ (Fomel, 2007).

After $\mathbf{c}_1$ and $\mathbf{c}_2$ are solved, the localized version of $\vert c\vert$ is

$\displaystyle \mathbf{c} = \sqrt{\mathbf{c}_1\circ\mathbf{c}_2},$ (C-11)

where $\mathbf{c}$ is the output local similarity and $\circ$ denotes the element-wise (Hadamard) product.
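Putting equations C-9 through C-11 together, a dense-matrix sketch for short vectors might look as follows. The boxcar smoother standing in for $\mathcal{T}$, the direct linear solve in place of an iterative shaping-regularization scheme, the absolute value guarding the square root, and all names are our illustrative assumptions:

```python
import numpy as np

def smoothing_matrix(n, half_width=5):
    """Boxcar smoothing operator T as an explicit n x n matrix.

    A simple stand-in for the smoother used in shaping regularization;
    rows are renormalized where the window is truncated at the boundaries.
    """
    T = np.zeros((n, n))
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        T[i, lo:hi] = 1.0 / (hi - lo)
    return T

def local_similarity(a, b, half_width=5):
    """Local similarity via equations C-9 to C-11 (dense-matrix sketch).

    Production code would apply T and the diagonal operators matrix-free
    and solve the systems iteratively; this version builds everything
    explicitly and is only practical for short vectors.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(a)
    I = np.eye(n)
    T = smoothing_matrix(n, half_width)
    A, B = np.diag(a), np.diag(b)
    # lambda_1 = ||B^T B||_2 and lambda_2 = ||A^T A||_2 (spectral norms).
    lam1 = np.linalg.norm(B.T @ B, 2)
    lam2 = np.linalg.norm(A.T @ A, 2)
    # Equations C-9 and C-10.
    c1 = np.linalg.solve(lam1**2 * I + T @ (B.T @ B - lam1**2 * I),
                         T @ (B.T @ a))
    c2 = np.linalg.solve(lam2**2 * I + T @ (A.T @ A - lam2**2 * I),
                         T @ (A.T @ b))
    # Equation C-11; |.| guards against small negative products numerically.
    return np.sqrt(np.abs(c1 * c2))
```

A useful sanity check: for $\mathbf{a}=\mathbf{b}$, the constant vector of ones solves both systems exactly (the boxcar rows sum to one), so the local similarity is one everywhere.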

To calculate the 2D and 3D versions of local similarity, as used to demonstrate the reconstruction performance in the main text of the paper, one first reshapes a 2D matrix or a 3D tensor into a 1D vector and then applies the formulas above to obtain the local similarity vector. Since the local similarity vector has the same size as the two input vectors, it can easily be reshaped into 2D or 3D form for display purposes.
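The reshape round-trip for the 2D case can be sketched as below. Here `local_similarity_1d` is a deliberately simplified stand-in that replaces the shaping-regularized inversion with plain local averages; the point of the example is only the flatten-compute-reshape pattern:

```python
import numpy as np

def local_similarity_1d(a, b, half_width=5):
    """Simplified stand-in for the 1D local-similarity routine:
    local running averages replace the shaping-regularized inversion."""
    def smooth(x):
        w = np.ones(2 * half_width + 1)
        # Normalize by the local window weight so boundaries stay unbiased.
        return np.convolve(x, w, mode="same") / np.convolve(
            np.ones_like(x), w, mode="same")
    c1 = smooth(a * b) / smooth(b * b)
    c2 = smooth(b * a) / smooth(a * a)
    return np.sqrt(np.abs(c1 * c2))

# 2D case: flatten both images, compute the 1D attribute, reshape back.
rng = np.random.default_rng(1)
img_a = rng.standard_normal((20, 30)) + 3.0
img_b = img_a + 0.1 * rng.standard_normal((20, 30))
sim = local_similarity_1d(img_a.ravel(), img_b.ravel()).reshape(img_a.shape)
```

By the Cauchy-Schwarz inequality applied window by window, this smoothed-ratio form never exceeds one, which makes it convenient for display.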
