Before we look at the concept of statistical distance, it is worth pointing out first what is meant by distance in mathematics.

 

Definition
Suppose that V is a non-empty set and d is a function from V×V to \mathbb{R}. Then d is a distance function or a metric if for every p, q, r ∈ V the following conditions hold:

  1. d(p,q) ≥ 0
  2. d(p,q) = 0 ⇔ p = q
  3. d(p,q) = d(q,p)
  4. d(p,q) ≤ d(p,r) + d(r,q)

For instance, in \mathbb{R} we can define a distance function d where d(p,q) = \sqrt{(p-q)^2} for every p, q ∈ \mathbb{R}. (Equivalently, d(p,q)= |p-q|). It can be shown that for every p, q, r ∈ \mathbb{R} the following hold: 1) |p-q| \geq 0, 2) |p-q| = 0 \Leftrightarrow p = q, 3) |p-q| = |q-p|, and 4) |p-q| \leq |p-r| + |r-q|.

As another example, in \mathbb{R}^2 we can define a distance function as follows. The distance between A(a_1,a_2) and B(b_1,b_2) is d(A,B) = \sqrt{(a_1-b_1)^2+(a_2-b_2)^2}. It can be proved that d defined in this fashion also satisfies the four conditions 1), 2), 3), and 4) above.

The distance function in \mathbb{R}^2 above can be extended to \mathbb{R}^p, where the distance between A(a_1,a_2, \cdots ,a_p) and B(b_1,b_2, \cdots ,b_p) is defined by D(A,B)=\sqrt{(a_1-b_1)^2+(a_2-b_2)^2+ \cdots + (a_p-b_p)^2}. It can be proved that D satisfies the four conditions 1), 2), 3), and 4) above.
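For readers who want to experiment, the Euclidean distance in \mathbb{R}^p and the four metric conditions can be spot-checked numerically. The following Python sketch (not part of the original exposition; NumPy is assumed) does so for one choice of points:

```python
import numpy as np

def euclidean_distance(a, b):
    """Euclidean distance D(A, B) between two points of R^p."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.sqrt(np.sum((a - b) ** 2)))

p = np.array([1.0, 2.0, 3.0])
q = np.array([4.0, 6.0, 3.0])
r = np.array([0.0, 0.0, 1.0])

assert euclidean_distance(p, q) >= 0                         # condition 1
assert euclidean_distance(p, p) == 0                         # condition 2
assert euclidean_distance(p, q) == euclidean_distance(q, p)  # condition 3
assert euclidean_distance(p, q) <= euclidean_distance(p, r) + euclidean_distance(r, q)  # condition 4
print(euclidean_distance(p, q))  # sqrt(3^2 + 4^2 + 0^2) = 5.0
```

Of course, a few numerical checks do not prove the four conditions; they only illustrate them.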

In statistics, quantitative data can be plotted on a coordinate plane. Univariate data can be plotted on an axis (e.g. the x-axis or the real number line). Bivariate data can be plotted on a coordinate plane with two axes perpendicular to each other (e.g. the x and y axes on the xy-plane). As an example of bivariate data, consider the following sample data.

The data can be plotted on the xy-plane as follows.

Figure 1

The figure above demonstrates how bivariate data are plotted on the xy-plane. Such a presentation is possible for bivariate data, but not for data with more than 2 variates. Therefore, rather than stating that all bivariate data can be plotted on the xy-plane, it is more fruitful to assert that there is a one-to-one correspondence between bivariate data and their coordinate vectors relative to an orthonormal basis for \mathbb{R}^2. By stating the relationship this way, we can generalize it to data with more than 2 variates: “There is a one-to-one correspondence between p-variate data and their coordinate vectors relative to an orthonormal basis for \mathbb{R}^p.”

Now, how is the statistical distance referred to in this post defined? The distance between two points in \mathbb{R}^p (i.e. between two data points with p variates) defined by D(A,B)=\sqrt{(a_1-b_1)^2+(a_2-b_2)^2+ \cdots + (a_p-b_p)^2} as above takes into account neither the variance of each variable nor the covariance between the variables. By contrast, statistical distance “compensates” for the variance and covariance in multivariate data. The statistical distance d between the data vectors \vec{x} and \vec{y} is defined as follows.

d(\vec{x},\vec{y})= \sqrt{(\vec{x}-\vec{y})'A(\vec{x}-\vec{y})}

where

\vec{x}= (x_1 \quad x_2 \quad \cdots \quad x_p)'

\vec{y}= (y_1 \quad y_2 \quad \cdots \quad y_p)'

A = a positive definite symmetric matrix of order p

The matrix A above defines a statistical distance. In principal component analysis, A is a variance-covariance matrix.
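The definition translates directly into code. Here is a minimal Python sketch (NumPy assumed; the function name is mine, not the post's):

```python
import numpy as np

def statistical_distance(x, y, A):
    """d(x, y) = sqrt((x - y)' A (x - y)) for a symmetric positive definite A."""
    x, y, A = np.asarray(x, float), np.asarray(y, float), np.asarray(A, float)
    diff = x - y
    return float(np.sqrt(diff @ A @ diff))

# With A = I the statistical distance reduces to the ordinary Euclidean distance:
print(statistical_distance([1, 1], [0, 0], np.eye(2)))  # sqrt(2) = 1.4142...
```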

 

Example 1
Given a positive definite matrix A = \begin{pmatrix}9 & -2 \\ -2 & 6 \end{pmatrix}, find the statistical distance between K(2,1) and L(-1,0).

Answer
Let \vec{x}= \begin{pmatrix}2 \\ 1 \end{pmatrix} and \vec{y}= \begin{pmatrix}-1 \\ 0 \end{pmatrix}. Accordingly, \vec{x} - \vec{y} = \begin{pmatrix}2 \\ 1 \end{pmatrix} - \begin{pmatrix}-1 \\ 0 \end{pmatrix} = \begin{pmatrix}3 \\ 1 \end{pmatrix} and (\vec{x} - \vec{y})' = (3 \quad 1). From how the statistical distance is defined, d(\vec{x},\vec{y})= \sqrt{(\vec{x}-\vec{y})'A(\vec{x}-\vec{y})}, we get:

d(\vec{x},\vec{y}) = \sqrt{(3 \quad 1) \begin{pmatrix}9 & -2 \\ -2 & 6 \end{pmatrix} \begin{pmatrix}3 \\ 1 \end{pmatrix}} = \sqrt{(3 \quad 1) \begin{pmatrix}25 \\ 0 \end{pmatrix}} = \sqrt{75} = 5 \sqrt{3}

Thus, the statistical distance between K and L is 5 \sqrt{3}.
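The matrix arithmetic can be double-checked numerically; a quick NumPy check (not part of the original solution):

```python
import numpy as np

A = np.array([[9.0, -2.0], [-2.0, 6.0]])
x = np.array([2.0, 1.0])
y = np.array([-1.0, 0.0])

diff = x - y            # (3, 1)
d2 = diff @ A @ diff    # (3  1) A (3  1)' = 75
print(np.sqrt(d2))      # 8.660..., i.e. 5*sqrt(3)
```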

 

The Shape of “Circle” by Statistical Distance
In general, a circle is defined as the set of all points that are equidistant from a certain fixed point. The fixed point is called the center of the circle and the common distance is called the radius of the circle. It follows from the definition that the resulting circle depends on the domain of the distance function and on how the distance function is defined. To illustrate this, let the distance function d be defined on \mathbb{R}^2 such that the distance between A(a_1,a_2) ∈ \mathbb{R}^2 and B(b_1,b_2) ∈ \mathbb{R}^2 is d(A,B) = \sqrt{(a_1 - b_1)^2+(a_2 - b_2)^2}. By this definition, the shape of a circle with center O and radius 1 is as follows.

Figure 2

 

But what would the circle look like if the statistical distance were applied? Look at the following example.

 

Example 2
Find the equation of a circle with center O and radius 1 if the statistical distance is applied with A = \begin{pmatrix}9 & -2 \\ -2 & 6 \end{pmatrix}. Sketch the graph of the circle.

Answer
A circle with center O and radius 1 satisfies the equation d^2(\vec{x},O)=1. If \vec{x}= \begin{pmatrix}x_1 \\ x_2 \end{pmatrix} then the equation can be expressed as follows:

\vec{x} \: ' A \vec{x} = (x_1 \quad x_2) \begin{pmatrix}9 & -2 \\ -2 & 6 \end{pmatrix} \begin{pmatrix}x_1 \\ x_2 \end{pmatrix} = 9 {x_1}^2 - 4 x_1 x_2 + 6 {x_2}^2 = 1

The locus of the points satisfying the equation 9 {x_1}^2 - 4 x_1 x_2 + 6 {x_2}^2 = 1 is sketched as follows.

Figure 3

 

Note that the resulting circle has the form of an ellipse.

 

How can we determine the directions and the lengths of the major and minor axes of the ellipse if the statistical distance function is given? The answer can be inferred from Theorem 1 below.

 

Theorem 1
If A is a positive definite symmetric matrix with spectral decomposition A = \sum_{i=1}^p \lambda_i \vec{e}_i \vec{e}_i \: ' then:

  1. the set of points at a distance of c from the origin O has the equation \vec{x} \: ' A \vec{x} = c^2, which is equivalent to \sum_{i=1}^p \lambda_i (\vec{x} \: ' \cdot \vec{e}_i)^2 = c^2,
  2. \vec{x}= \frac{c}{\sqrt{\lambda_i}} \vec{e}_i satisfies \sum_{i=1}^p \lambda_i (\vec{x} \: ' \cdot \vec{e}_i)^2 = c^2 for each i = 1, 2, …, p, and
  3. the \vec{e}_i's, where i = 1, 2, …, p, are the direction vectors of the axes of the hyperellipsoid \sum_{i=1}^p \lambda_i (\vec{x} \: ' \cdot \vec{e}_i)^2 = c^2.

 

In Example 2 above, A = \lambda_1 \vec{e}_1 \cdot \vec{e}_1 \: ' + \lambda_2 \vec{e}_2 \cdot \vec{e}_2 \: ' where λ1 = 10, \vec{e}_1 = \begin{pmatrix}2/\sqrt{5} \\ -1/\sqrt{5} \end{pmatrix}, λ2 = 5, \vec{e}_2 = \begin{pmatrix}1/\sqrt{5} \\ 2/\sqrt{5} \end{pmatrix}. Half the length of the axis of the ellipse in the \vec{e}_1 direction is \frac{1}{\sqrt{10}} and half the length of the axis of the ellipse in the \vec{e}_2 direction is \frac{1}{\sqrt{5}}. This situation is depicted as follows.

Figure 3

 

As Figure 3 shows, |\overrightarrow{OV}| = \frac{1}{\sqrt{10}} and |\overrightarrow{OW}| = \frac{1}{\sqrt{5}}. As a consequence, the minor axis is \frac{2}{\sqrt{10}} in length and has the same direction as \vec{e}_1 determined above. On the other hand, the major axis is \frac{2}{\sqrt{5}} in length and has the same direction as \vec{e}_2.
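The eigenvalues and axis half-lengths quoted above can be recomputed with NumPy (a verification sketch, not part of the original derivation):

```python
import numpy as np

A = np.array([[9.0, -2.0], [-2.0, 6.0]])
# eigh returns eigenvalues in ascending order, with orthonormal eigenvectors as columns
lams, vecs = np.linalg.eigh(A)      # lams = [5, 10]
c = 1.0                             # the radius used in Example 2
half_lengths = c / np.sqrt(lams)    # [1/sqrt(5), 1/sqrt(10)]
print(lams)
print(half_lengths)
```

Note that `eigh` may return an eigenvector multiplied by -1; the direction of the axis is unaffected.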

 

Example 3
In a bivariate population, a statistical distance is defined by the variance-covariance matrix \Sigma = \begin{pmatrix}73 & -36 \\ -36 & 52 \end{pmatrix}.

  1. Find the distance from any point with the coordinates (x_1,x_2) to the origin O in the form d= \sqrt{a{x_1}^2 + bx_1 x_2 + c{x_2}^2}.
  2. Find the distance between T(3 - 2 \sqrt{3}, 4 + \frac{3 \sqrt{3}}{2}) and O.
  3. Let the distance of T from O be c. Find the equation of the ellipse corresponding to the locus of the points that are at a distance of c from O, and sketch the ellipse.
  4. Let the spectral decomposition of Σ be \Sigma = \lambda_1 \vec{e}_1 \cdot \vec{e}_1 \: ' + \lambda_2 \vec{e}_2 \cdot \vec{e}_2 \: ' with λ1 > λ2. Determine and draw the new coordinate axes \tilde{x}_1 and \tilde{x}_2 on the conditions that the direction vector of the \tilde{x}_1 axis is \vec{e}_1 and the direction vector of the \tilde{x}_2 axis is \vec{e}_2.
  5. Express the equation of the ellipse in part 3 of this example in \tilde{x}_1 and \tilde{x}_2.
  6. Determine the coordinates of T relative to the ordered basis \{\vec{e}_1,\vec{e}_2 \}.
  7. Let the coordinates of T in part 6 of this example be (k_1,k_2). Verify that \tilde{x}_1 = k_1 and \tilde{x}_2 = k_2 satisfy the equation of the ellipse in part 5.

 

Answer to part 1

d^2 = \vec{x} \: ' \Sigma \vec{x} = (x_1 \quad x_2) \begin{pmatrix}73 & -36 \\ -36 & 52 \end{pmatrix} \begin{pmatrix}x_1 \\ x_2 \end{pmatrix} = 73 {x_1}^2 - 72 x_1 x_2 + 52 {x_2}^2, so d = \sqrt{73 {x_1}^2 - 72 x_1 x_2 + 52 {x_2}^2}.

Answer to part 2

Substituting x_1 = 3 - 2 \sqrt{3} and x_2 = 4 + \frac{3 \sqrt{3}}{2} into d as obtained in the answer to part 1, we have the following.

d = \sqrt{73 (3 - 2 \sqrt{3})^2 - 72 (3 - 2 \sqrt{3})(4 + \tfrac{3 \sqrt{3}}{2}) + 52 (4 + \tfrac{3 \sqrt{3}}{2})^2} = \sqrt{73(21 - 12 \sqrt{3}) - 72(3 - \tfrac{7 \sqrt{3}}{2}) + 52(\tfrac{91}{4} + 12 \sqrt{3})} = \sqrt{2500} = 50

Therefore, the distance from T to O is 50.
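This result can be confirmed numerically (a NumPy check, not part of the original answer):

```python
import numpy as np

Sigma = np.array([[73.0, -36.0], [-36.0, 52.0]])
T = np.array([3 - 2 * np.sqrt(3), 4 + 3 * np.sqrt(3) / 2])

d2 = T @ Sigma @ T      # 73 x1^2 - 72 x1 x2 + 52 x2^2 evaluated at T
print(np.sqrt(d2))      # 50.0 (up to floating point)
```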

 

Answer to part 3

The desired ellipse equation is:

73 {x_1}^2 - 72 x_1 x_2 + 52 {x_2}^2 = 2500

The characteristic equation of Σ is:

| \Sigma - \lambda I | = (73 - \lambda)(52 - \lambda) - (-36)^2 = \lambda^2 - 125 \lambda + 2500 = (\lambda - 100)(\lambda - 25) = 0

This yields the eigenvalues λ1 = 100 and λ2 = 25.

λ1 = 100 gives the eigenvector \vec{e}_1 = \begin{pmatrix}0.8 \\ -0.6 \end{pmatrix}.

λ2 = 25 gives the eigenvector \vec{e}_2 = \begin{pmatrix}0.6 \\ 0.8 \end{pmatrix}.

According to Theorem 1, \vec{e}_1 and \vec{e}_2 are the direction vectors of the axes of the ellipse. From part 2 of Theorem 1, it can be inferred that half the length of the axis in the \vec{e}_1 direction is \frac{50}{\sqrt{100}}=5 and half the length of the axis in the \vec{e}_2 direction is \frac{50}{\sqrt{25}}=10. This situation can be depicted as follows.

Figure 4
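The eigenpairs and half-lengths in the answer above can be cross-checked with NumPy (a verification sketch, not part of the original answer):

```python
import numpy as np

Sigma = np.array([[73.0, -36.0], [-36.0, 52.0]])
lams, vecs = np.linalg.eigh(Sigma)   # ascending order: [25, 100]
c = 50.0                             # the radius found in part 2
half_lengths = c / np.sqrt(lams)     # [10, 5]
print(lams, half_lengths)
```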

Answer to part 4

Figure 5

Answer to part 5

To express the ellipse equation in \tilde{x}_1 and \tilde{x}_2, we diagonalize Σ. If P = ( \vec{e}_1 \quad \vec{e}_2) then the diagonal matrix produced is P' \Sigma P.

In this case, P = \begin{pmatrix}0.8 & 0.6 \\ -0.6 & 0.8 \end{pmatrix}. Therefore,

P' \Sigma P = \begin{pmatrix}0.8 & -0.6 \\ 0.6 & 0.8 \end{pmatrix} \begin{pmatrix}73 & -36 \\ -36 & 52 \end{pmatrix} \begin{pmatrix}0.8 & 0.6 \\ -0.6 & 0.8 \end{pmatrix} = \begin{pmatrix}100 & 0 \\ 0 & 25 \end{pmatrix}

Thus, the required equation is 100 {\tilde{x}_1}^2 + 25 {\tilde{x}_2}^2 = 2500, which is equivalent to \frac{{\tilde{x}_1}^2}{25}+\frac{{\tilde{x}_2}^2}{100}=1.
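The diagonalization is easy to verify numerically (a NumPy check, not part of the original answer):

```python
import numpy as np

Sigma = np.array([[73.0, -36.0], [-36.0, 52.0]])
P = np.array([[0.8, 0.6],
              [-0.6, 0.8]])          # columns are e1 and e2

D = P.T @ Sigma @ P                  # orthogonal diagonalization
print(np.round(D, 10))               # diag(100, 25)
```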

 

Answer to part 6

To determine the coordinates of T relative to B' = \{\vec{e}_1,\vec{e}_2 \}, which is used as the ordered basis for the \tilde{x}_1 \tilde{x}_2-plane, we can use the formula {\left[ T \right]}_{B'} = P^{-1} {\left[ T \right]}_B where B = \{\begin{pmatrix}1 \\ 0 \end{pmatrix},\begin{pmatrix}0 \\ 1 \end{pmatrix} \}. Here {\left[ T \right]}_{B'} is the coordinate matrix of T relative to basis B' and {\left[ T \right]}_B is the coordinate matrix of T relative to basis B. Since P is orthogonal, P^{-1} = P', so:

{\left[ T \right]}_{B'} = \begin{pmatrix}0.8 & -0.6 \\ 0.6 & 0.8 \end{pmatrix} \begin{pmatrix}3 - 2 \sqrt{3} \\ 4 + \frac{3 \sqrt{3}}{2} \end{pmatrix} = \begin{pmatrix}- \frac{5 \sqrt{3}}{2} \\ 5 \end{pmatrix}

Accordingly, the coordinates of T relative to the ordered basis \{\vec{e}_1,\vec{e}_2 \} are (- \frac{5 \sqrt{3}}{2}, 5).
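The change of coordinates can be reproduced with NumPy (a verification sketch, not part of the original answer):

```python
import numpy as np

P = np.array([[0.8, 0.6],
              [-0.6, 0.8]])          # columns are e1 and e2
T = np.array([3 - 2 * np.sqrt(3), 4 + 3 * np.sqrt(3) / 2])

# P is orthogonal, so P^{-1} = P'
T_new = P.T @ T
print(T_new)                         # [-5*sqrt(3)/2, 5], approximately [-4.3301, 5.0]
```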

 

Answer to part 7

Substitute \tilde{x}_1 = \frac{-5 \sqrt{3}}{2} and \tilde{x}_2 = 5 into the equation 100 {\tilde{x}_1}^2 + 25 {\tilde{x}_2}^2 = 2500. This results in the following.

100 \cdot \frac{75}{4} + 25 \cdot 25 = 1875 + 625 = 2500

2500 = 2500 (a true statement)

So, \tilde{x}_1 = \frac{-5 \sqrt{3}}{2} and \tilde{x}_2 = 5 satisfy the ellipse equation in part 5. (See Figure 6 below.)

Figure 6

 

Example 4
A random vector \vec{X}= \begin{pmatrix}X_1 \\ X_2 \end{pmatrix} has a bivariate normal density f(\vec{x}) = \frac{1}{2 \pi | \Sigma |^{1/2}} e^{-\frac{1}{2} \vec{x} \: ' \Sigma^{-1} \vec{x}} with \Sigma = \begin{pmatrix} 9 & -2 \\ -2 & 6 \end{pmatrix}. Sketch a constant density ellipse \vec{x} \: ' \Sigma^{-1} \vec{x} = 1. Find the length of the major and minor axes of the ellipse. Determine its principal components.

Answer
It can be shown that the spectral decomposition of Σ is \Sigma = \lambda_1 \vec{e}_1 \cdot \vec{e}_1 \: ' + \lambda_2 \vec{e}_2 \cdot \vec{e}_2 \: ' with λ1 = 10, \vec{e}_1 = \begin{pmatrix}2/\sqrt{5} \\ -1/\sqrt{5} \end{pmatrix}, λ2 = 5, \vec{e}_2 = \begin{pmatrix}1/\sqrt{5} \\ 2/\sqrt{5} \end{pmatrix}.

Consequently, the spectral decomposition of \Sigma^{-1} is:

\Sigma^{-1} = \frac{1}{10} \vec{e}_1 \cdot \vec{e}_1 \: ' + \frac{1}{5} \vec{e}_2 \cdot \vec{e}_2 \: '

By part 2 of Theorem 1, it can be concluded that half the length of the axis of the ellipse in the \vec{e}_1 direction is \frac{1}{\sqrt{1/10}} = \sqrt{10} and half the length of the axis of the ellipse in the \vec{e}_2 direction is \frac{1}{\sqrt{1/5}} = \sqrt{5}. Consequently, the lengths of the major and minor axes are 2 \sqrt{10} and 2 \sqrt{5}, respectively.

The first principal component: Y_1 = \vec{e}_1 \: ' \cdot \vec{X} = \frac{2}{\sqrt{5}} X_1 - \frac{1}{\sqrt{5}} X_2

The second principal component: Y_2 = \vec{e}_2 \: ' \cdot \vec{X} = \frac{1}{\sqrt{5}} X_1 + \frac{2}{\sqrt{5}} X_2
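The whole of Example 4 can be reproduced with NumPy (a verification sketch, not part of the original answer; `eigh` may flip the sign of an eigenvector, which does not change the component axes):

```python
import numpy as np

Sigma = np.array([[9.0, -2.0], [-2.0, 6.0]])
lams, vecs = np.linalg.eigh(Sigma)       # ascending: [5, 10]

# Reorder the eigenpairs by decreasing eigenvalue, as is usual in PCA
order = np.argsort(lams)[::-1]
lams, vecs = lams[order], vecs[:, order]

print(lams)                # [10, 5]
print(vecs[:, 0])          # first PC direction, +/- (2, -1)/sqrt(5)
# Half-lengths of the constant density ellipse x' Sigma^{-1} x = 1:
print(np.sqrt(lams))       # [sqrt(10), sqrt(5)]
```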

Example 4 illustrates an application of the following well-known theorem on a multivariate population involving p variates.

Theorem 2
If the random vector \vec{X}=(X_1 \quad X_2 \quad \cdots \quad X_p)' has a multivariate normal distribution with mean \vec{\mu} and covariance matrix Σ, then the density of \vec{X} is constant on the \vec{\mu}-centered ellipsoids (\vec{x} - \vec{\mu})' \Sigma^{-1} (\vec{x} - \vec{\mu}) = c^2, which have axes \pm c \sqrt{\lambda_i} \vec{e}_i, i = 1, 2, …, p, where the (\lambda_i, \vec{e}_i) are the eigenvalue-eigenvector pairs of Σ.
