
Gaussian similarity-mean filter

The similarity-mean filter is a nonlinear filter that uses local correlation coefficients as weight coefficients (Liu et al., 2009a). It is described in Appendix A. We chose a shaping operator with a smoothing radius of $10$ samples in time to calculate local similarity coefficients between the predicted data at each prediction step and the original (reference) data. Figure 2a displays the similarity-weight coefficients obtained from local correlation. The elements with the shortest prediction distance get the largest weights because they provide the most accurate prediction and are therefore most similar to the original image. We used zero-value boundary conditions for the prediction, so the predicted amplitudes from the leftmost and rightmost sides are zero, which forces the similarity coefficients at the corners of the weight cube to be zero. In the weighted mean filter, large weight coefficients are selected where the similarity between the processed data and the reference data is strong. We additionally introduce Gaussian weights to localize the smoothing characteristics of the filter

\begin{displaymath}
w_i = e^{-h_i^2/{h_r^2}}\;,
\end{displaymath} (4)

where $h_i$ is the distance to trace $i$ and $h_r$ is a reference parameter that controls the shape of the weight function. This construction is analogous to bilateral or non-local filtering (Gilboa and Osher, 2008; Tomasi and Manduchi, 1998). Figure 2b shows the product of the Gaussian weights and the similarity weights. The prediction data volumes with only similarity weights applied and with Gaussian similarity weights applied are shown in Figures 2c and 2d, respectively. After applying the Gaussian and similarity weights, we stack the data in Figure 2d along the prediction direction. The result is shown in Figure 3b.
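For concreteness, the following NumPy sketch illustrates the weighting and stacking steps under simplifying assumptions: the shaping-regularized local similarity of Appendix A is replaced here by a boxcar-smoothed local correlation, and the names and parameters (gaussian_similarity_stack, h_r, the layout of the prediction cube) are illustrative rather than taken from the actual programs used for this paper.

\begin{verbatim}
import numpy as np

def boxcar_smooth(x, radius, axis=0):
    # Running-mean smoothing along one axis; a simplified stand-in for the
    # shaping (smoothing) operator used to regularize local similarity.
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.apply_along_axis(
        lambda t: np.convolve(t, kernel, mode='same'), axis, x)

def local_similarity(pred, ref, radius=10, axis=1, eps=1e-10):
    # Smoothed local correlation between predicted and reference data,
    # a proxy for the local similarity coefficients of Appendix A.
    num = boxcar_smooth(pred * ref, radius, axis)
    den = np.sqrt(boxcar_smooth(pred * pred, radius, axis) *
                  boxcar_smooth(ref * ref, radius, axis)) + eps
    return num / den

def gaussian_similarity_stack(pred_cube, ref, h_r=3.0, radius=10, eps=1e-10):
    # pred_cube: (nshift, nt, nx) prediction volume; pred_cube[i] is the
    #            image predicted across distance h_i = i - nshift//2.
    # ref:       (nt, nx) original (reference) image.
    nshift = pred_cube.shape[0]
    h = np.arange(nshift) - nshift // 2                # prediction distances
    w_gauss = np.exp(-(h / h_r) ** 2)[:, None, None]   # equation (4)
    w_sim = np.clip(local_similarity(pred_cube, ref[None, :, :],
                                     radius, axis=1), 0.0, None)
    w = w_gauss * w_sim                                # combined weights
    # Weighted stack along the prediction direction (analogous to Figure 3b).
    return (w * pred_cube).sum(axis=0) / (w.sum(axis=0) + eps)
\end{verbatim}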

Figure 2.
Similarity weights (a), product of Gaussian weights and similarity weights (b), the data only with similarity weights applied (c), and the data with Gaussian similarity weights applied (d).

For comparison, we used the standard mean filter to process the prediction data volume (Figure 1d) along the prediction direction. The result is shown in Figure 3a and corresponds, in this case, to simple box smoothing along the local image structure. The standard mean filter simply stacks all information along the prediction direction. It enhances structural continuity but smears information across the fault.
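As a point of reference, the standard mean filter in this comparison amounts to an equal-weight stack of the same prediction cube; a one-line sketch under the same (assumed) cube layout as above:

\begin{verbatim}
def mean_stack(pred_cube):
    # Equal-weight stack along the prediction direction: simple box
    # smoothing along the local image structure (analogous to Figure 3a).
    return pred_cube.mean(axis=0)
\end{verbatim}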

For further discussion, we show the difference between the noisy image (Figure 1b) and the structure-enhancing results of the standard mean filter and the Gaussian similarity-mean filter (Figures 3a and 3b). We kept the same magnitude scale and plotting clips as those of the input image. As seen in Figures 4a and 4b, the coherent events are well preserved by both methods because structure prediction accurately predicts coherent information. However, the mean filter destroys fault information (Figure 4a), whereas the similarity-mean filter preserves the faults well while attenuating random noise (Figure 4b).

Figure 3.
Structure-enhancing results using different methods. Standard mean filtering (a), similarity-mean filtering (b), standard median filtering with filter-window length $15$ (c), and lower-upper-middle (LUM) filtering with parameters $k=l=7$ (d).

