
Definition of a vector space

An operator transforms one space into another. Examples of spaces are model space $\bold m$ and data space $\bold d$. We think of elements of these spaces as vectors whose components are packed with numbers, either real or complex. The important practical concept is that this packing covers not only one-dimensional spaces like signals, two-dimensional spaces like images, 3-D movie cubes, and zero-dimensional spaces like a data mean, but also mixed sets of 1-D, 2-D, and 3-D objects. One space that is a set of three cubes is the Earth's magnetic field, which has three components, each component being a function of three-dimensional physical space. (The 3-D physical space we live in is not the abstract vector space of models and data so abundant in this book. In this book the word ``space'' without an adjective means the ``vector space.'') Other common spaces are physical space and Fourier space.
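As a minimal sketch of this packing idea (not from the text; it assumes Python with NumPy and uses hypothetical helper names pack and unpack), mixed 1-D, 2-D, and 3-D objects can be flattened and concatenated into one abstract vector, and recovered again from their shapes:

import numpy as np

def pack(objects):
    # Hypothetical helper: concatenate arrays of any shape into one 1-D vector.
    return np.concatenate([np.ravel(obj) for obj in objects])

def unpack(vector, shapes):
    # Hypothetical helper: recover the original objects from the packed vector.
    objects, start = [], 0
    for shape in shapes:
        size = int(np.prod(shape))
        objects.append(vector[start:start + size].reshape(shape))
        start += size
    return objects

signal = np.random.rand(100)        # 1-D object (a signal)
image  = np.random.rand(20, 30)     # 2-D object (an image)
cube   = np.random.rand(5, 6, 7)    # 3-D object (a movie cube)

m = pack([signal, image, cube])     # one abstract model vector
print(m.shape)                      # (910,) = 100 + 600 + 210 components

Generic vector operations (scaling, addition, dot products) then apply to $\bold m$ regardless of how many objects of what dimension went into it.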

A more heterogeneous example of a vector space is data tracks. A depth-sounding survey of a lake can make a vector space that is a collection of tracks, a vector of vectors (each vector having a different number of components, because lakes are not square). This vector space of depths along tracks in a lake contains the depth values only. The $(x,y)$-coordinate information locating each measured depth value is (normally) something outside the vector space. A data space could also be a collection of echo soundings, waveforms recorded along tracks.
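A minimal sketch of such a ragged data space, assuming Python with NumPy (the track lengths and depth values are invented for illustration only):

import numpy as np

# Each track has its own number of depth samples (lakes are not square).
# Only the depths belong to the vector space; the (x, y) coordinates of
# each sounding are bookkeeping kept outside it.
tracks = [np.array([3.1, 3.4, 4.0, 4.2]),       # track 1: 4 soundings
          np.array([2.8, 3.0, 3.3]),            # track 2: 3 soundings
          np.array([5.0, 5.2, 5.1, 4.9, 4.7])]  # track 3: 5 soundings

d = np.concatenate(tracks)   # the data-space vector of depths
print(d.size)                # 12 components in all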

We briefly recall facts about vector spaces found in elementary books: Let $\alpha$ be any scalar. If $\bold d_1$ is a vector and $\bold d_2$ is conformable with it, then $\alpha \bold d_1$ and $\bold d_1 + \bold d_2$ are also vectors. The size measure of a vector is a positive value called a norm. The norm is usually defined through the dot product (giving the $L_2$ norm), say $\bold d \cdot \bold d$. For complex data it is $\bar{\bold{d}} \cdot \bold{d}$, where $\bar{\bold{d}}$ is the complex conjugate of $\bold{d}$. A notation that does the transpose and the complex conjugate at the same time is $\bold d\T \bold d$. In theoretical work, the ``size of a vector'' means the vector's norm. In computational work, the ``size of a vector'' means the number of components in the vector.
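A small numerical illustration, assuming Python with NumPy: np.vdot conjugates its first argument, so it realizes the $\bar{\bold{d}} \cdot \bold{d}$ (equivalently $\bold d\T \bold d$) recipe for complex data:

import numpy as np

d = np.array([1.0 + 2.0j, 3.0 - 1.0j, 0.5 + 0.0j])
norm_squared = np.vdot(d, d).real   # vdot conjugates its first argument
print(norm_squared)                 # sum of |d_i|^2 = 15.25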

Norms generally include a weighting function. In physics, the norm generally measures a conserved quantity like energy or momentum; for example, a weighting function for magnetic flux density is the reciprocal of the magnetic permeability. In data analysis, the proper choice of the weighting function is a practical statistical issue, discussed repeatedly throughout this book. The algebraic view of a weighting function is that it is a diagonal matrix with nonnegative values $w(i)\ge 0$ spread along the diagonal, denoted $\bold W = {\bf diag}[w(i)]$. With this weighting function, the $L_2$ norm of a data vector is $\bold d\T \bold W \bold d$. Standard notation for norms uses a double absolute value, where $\vert\vert\bold d\vert\vert^2 = \bold d\T \bold W \bold d$. A central concept with norms is the triangle inequality, $ \vert\vert\bold d_1 + \bold d_2\vert\vert \le \vert\vert\bold d_1\vert\vert + \vert\vert\bold d_2\vert\vert $, whose proof you might recall (or reproduce with the use of dot products).
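A sketch of the weighted norm and a numerical spot-check of the triangle inequality, assuming Python with NumPy; wnorm is a hypothetical helper, and the weights and data are random stand-ins:

import numpy as np

def wnorm(d, w):
    # Hypothetical helper: ||d|| = sqrt(d^T W d) for diagonal W = diag[w(i)].
    return np.sqrt(np.sum(w * d * d))

rng = np.random.default_rng(0)
w  = rng.random(50)            # nonnegative diagonal weights w(i) >= 0
d1 = rng.standard_normal(50)
d2 = rng.standard_normal(50)

print(wnorm(d1 + d2, w) <= wnorm(d1, w) + wnorm(d2, w))   # True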

