A principal component analysis is concerned with explaining the variance-covariance structure of a set of variables through a few linear combinations of these variables. Hereinafter, each linear combination is referred to as a component. The number of components can be chosen so that the total variance of these components is almost equal to the total variance of the original variables. Thus, the components carry almost as much information as the original variables. In addition, the derived components are orthogonal to each other; in other words, the components are uncorrelated with one another.
The resulting components are rarely treated as the ultimate objective in multivariate statistics. Rather, they often serve as inputs to other multivariate statistical analyses such as multiple regression, cluster analysis, and factor analysis.
Suppose that the random vector X′ = [X1, X2, …, Xp] has the covariance matrix Σ with eigenvalues λ1 ≥ λ2 ≥ … ≥ λp ≥ 0. Consider the p linear combinations below.
Y1 = a1′X = a11X1 + a12X2 + … + a1pXp
Y2 = a2′X = a21X1 + a22X2 + … + a2pXp
⋮
Yp = ap′X = ap1X1 + ap2X2 + … + appXp
Therefore,
Var(Yi) = ai′Σai ; i = 1, 2, …, p
Cov(Yi, Yk) = ai′Σak ; i, k = 1, 2, …, p
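As a brief illustration of these two formulas, the sketch below evaluates ai′Σai and ai′Σak with NumPy for a small covariance matrix and two coefficient vectors; the numbers are purely hypothetical and do not come from this article's example.

```python
import numpy as np

# Hypothetical covariance matrix and coefficient vectors, for illustration only.
Sigma = np.array([[4.0, 1.5],
                  [1.5, 2.0]])
a1 = np.array([0.8, 0.6])    # coefficients of Y1 = a1'X
a2 = np.array([-0.6, 0.8])   # coefficients of Y2 = a2'X

var_Y1 = a1 @ Sigma @ a1     # Var(Y1) = a1' Sigma a1
var_Y2 = a2 @ Sigma @ a2     # Var(Y2) = a2' Sigma a2
cov_Y12 = a1 @ Sigma @ a2    # Cov(Y1, Y2) = a1' Sigma a2

print(var_Y1, var_Y2, cov_Y12)
```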
The principal components are those uncorrelated linear combinations Y1, Y2, …, Yp with the property that, for every i ∈ {1, 2, …, p}, Var(Yi) is as large as possible.
By the definition of Yi, a problem arises: Var(Yi) = ai′Σai can be made arbitrarily large simply by multiplying ai by some constant. To eliminate this indeterminacy, a condition is added: each coefficient vector ai must be a unit vector, i.e. ai′ai = 1. The principal components are therefore defined as follows.
First principal component = linear combination a1′X that maximizes Var(a1′X) subject to a1′a1 = 1.
Second principal component = linear combination a2′X that maximizes Var(a2′X) subject to a2′a2 = 1 and Cov(a1′X, a2′X) = 0.
At the i th step,
i th principal component = linear combination ai′X that maximizes Var(ai′X) subject to ai′ai = 1 and Cov(ak′X, ai′X) = 0 for k < i.
Theorem 1
Let Σ be the covariance matrix associated with the random vector X′ = [X1, X2, …, Xp]. Also suppose that Σ has the eigenvalue-eigenvector pairs (λ1, e1), (λ2, e2), …, (λp, ep) where λ1 ≥ λ2 ≥ … ≥ λp ≥ 0. Then, the i th principal component is as follows:
Yi = ei′X = ei1X1 + ei2X2 + … + eipXp for i = 1, 2, …, p
Further consequences:
Var(Yi) = ei′Σei = λi ; i = 1, 2, …, p
Cov(Yi, Yk) = ei′Σek = 0 ; i ≠ k.
If some λi are equal, the choices of the corresponding coefficient vectors ei (and hence Yi) are not unique.
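Theorem 1 translates directly into a numerical recipe: diagonalize Σ and order the eigenpairs by decreasing eigenvalue. The sketch below does this with NumPy for a hypothetical 3 × 3 covariance matrix; the matrix and the observation x are illustrative assumptions, not values from the text.

```python
import numpy as np

# Hypothetical covariance matrix, for illustration only.
Sigma = np.array([[4.0, 2.0, 0.5],
                  [2.0, 3.0, 1.0],
                  [0.5, 1.0, 2.0]])

# eigh returns eigenvalues of a symmetric matrix in ascending order;
# reverse the order so that lambda_1 >= lambda_2 >= ... >= lambda_p.
eigvals, eigvecs = np.linalg.eigh(Sigma)
order = np.argsort(eigvals)[::-1]
lam = eigvals[order]       # lambda_i = Var(Y_i)
E = eigvecs[:, order]      # column i is the unit-norm eigenvector e_i

# Y_i = e_i'X; for an observation x the component scores are E.T @ x.
x = np.array([1.0, -0.5, 2.0])   # hypothetical observation
scores = E.T @ x

print(lam)      # component variances
print(E)        # coefficient vectors (columns)
print(scores)   # values of Y_1, ..., Y_p at x
```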
Example
Suppose that the random vector X′ = [X1, X2, X3, X4] has a covariance matrix Σ whose diagonal elements (the variances of X1, X2, X3, X4) are σ11 = 30, σ22 = 32, σ33 = 13, and σ44 = 45.
To determine the principal components, first calculate the eigenvalues and the corresponding eigenvectors of Σ. The eigenvectors are chosen so that their norms are equal to 1. The eigenvalues (ordered from largest to smallest) and their corresponding eigenvectors are as follows.
λ1 = 71.224, e1′ = [-0.229, 0.622, 0.197, -0.722]
λ2 = 31.511, e2′ = [0.861, -0.028, -0.328, -0.387]
λ3 = 14.343, e3′ = [0.447, 0.477, 0.618, 0.437]
λ4 = 2.923, e4′ = [-0.075, 0.620, -0.687, 0.371]
According to Theorem 1, the principal components are:
Y1 = -0.229 X1 + 0.622 X2 + 0.197 X3 – 0.722 X4
Y2 = 0.861 X1 – 0.028 X2 – 0.328 X3 – 0.387 X4
Y3 = 0.447 X1 + 0.477 X2 + 0.618 X3 + 0.437 X4
Y4 = -0.075 X1 + 0.620 X2 – 0.687 X3 + 0.371 X4
Also, by Theorem 1:
Var(Y1) = λ1 = 71.224
Var(Y2) = λ2 = 31.511
Var(Y3) = λ3 =14.343
Var(Y4) = λ4 = 2.923
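As a sanity check on the example, the listed eigenvalues and eigenvectors can be plugged into the spectral decomposition Σ = λ1e1e1′ + … + λ4e4e4′; up to rounding in the published coefficients, the diagonal of the reconstructed matrix should match the variances σ11 = 30, σ22 = 32, σ33 = 13, σ44 = 45 of the example. A minimal sketch:

```python
import numpy as np

# Eigenvalues and unit-norm eigenvectors as listed in the example;
# row i of E holds the coefficients of Y_i, i.e. e_i'.
lam = np.array([71.224, 31.511, 14.343, 2.923])
E = np.array([[-0.229,  0.622,  0.197, -0.722],
              [ 0.861, -0.028, -0.328, -0.387],
              [ 0.447,  0.477,  0.618,  0.437],
              [-0.075,  0.620, -0.687,  0.371]])

# Spectral decomposition: Sigma = sum_i lam_i * e_i e_i'
Sigma = E.T @ np.diag(lam) @ E

print(np.round(np.diag(Sigma), 1))   # [30. 32. 13. 45.], up to rounding
print(np.round(E @ E.T, 2))          # approx. the 4x4 identity (orthonormal e_i)
```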
Theorem 2
Suppose that the random vector X′ = [X1, X2, …, Xp] has the covariance matrix Σ with eigenvalue-eigenvector pairs (λ1, e1), (λ2, e2), …, (λp, ep) and λ1 ≥ λ2 ≥ … ≥ λp ≥ 0. Let Y1 = e1′X, Y2 = e2′X, …, Yp = ep′X be the principal components. Then, the sum of the variances of X1, X2, …, Xp is equal to the sum of the variances of Y1, Y2, …, Yp.
Based on one of the consequences in Theorem 1, i.e. Var(Yi) = λi for i = 1, 2, …, p, Theorem 2 deduces the following.
σ11 + σ22 + … + σpp = Var(X1) + Var(X2) + … + Var(Xp) = λ1 + λ2 + … + λp = Var(Y1) + Var(Y2) + … + Var(Yp)
In the example above, Var(Y1) + Var(Y2) + Var(Y3) + Var(Y4) = λ1 + λ2 + λ3 + λ4 = 71.224 + 31.511 + 14.343 + 2.923 = 120.001. The sum of the diagonal elements of the matrix Σ is nothing but Var(X1) + Var(X2) + Var(X3) + Var(X4) = σ11 + σ22 + σ33 + σ44 = 30 + 32 + 13 + 45 = 120. This is in accordance with the conclusion of Theorem 2; the small discrepancy is due to rounding of the eigenvalues.
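The same check is a one-liner in code, reusing only the four eigenvalues and the four diagonal entries quoted above.

```python
import numpy as np

lam = np.array([71.224, 31.511, 14.343, 2.923])   # Var(Y1), ..., Var(Y4)
diag_Sigma = np.array([30.0, 32.0, 13.0, 45.0])   # sigma_11, ..., sigma_44

print(lam.sum())          # approx. 120.001
print(diag_Sigma.sum())   # 120.0 -- equal up to rounding, as Theorem 2 requires
```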
Theorem 3
If Y1 = e1′X, Y2 = e2′X, …, Yp = ep′X are the principal components obtained from the covariance matrix Σ, then the correlation coefficient between the component Yi and the variable Xk is ρ(Yi, Xk) = eik√λi / √σkk for i, k = 1, 2, …, p, where (λ1, e1), (λ2, e2), …, (λp, ep) are the eigenvalue-eigenvector pairs of Σ.
As an example of how to apply Theorem 3, suppose that we are to find the correlation between Y4 and X1. From the equation Y4 = -0.075 X1 + 0.620 X2 – 0.687 X3 + 0.371 X4 we have e41 = -0.075. By applying Theorem 1, we have λ4 = 2.923. From the covariance matrix, we get σ11 = 30. Furthermore, by Theorem 3 we get ρ(Y4, X1) = e41√λ4 / √σ11 = (-0.075)√2.923 / √30 ≈ -0.023. Similarly, the correlation between Y4 and X2 is ρ(Y4, X2) = e42√λ4 / √σ22 = (0.620)√2.923 / √32 ≈ 0.187.
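A minimal sketch of this calculation, using only the eigenvalue λ4, the coefficients of Y4, and the variances σ11 and σ22 quoted above:

```python
import numpy as np

lam4 = 2.923
e4 = np.array([-0.075, 0.620, -0.687, 0.371])   # coefficients of Y_4
sigma_11, sigma_22 = 30.0, 32.0

# Theorem 3: corr(Y_i, X_k) = e_ik * sqrt(lambda_i) / sqrt(sigma_kk)
corr_Y4_X1 = e4[0] * np.sqrt(lam4) / np.sqrt(sigma_11)
corr_Y4_X2 = e4[1] * np.sqrt(lam4) / np.sqrt(sigma_22)

print(round(corr_Y4_X1, 3))   # approx. -0.023
print(round(corr_Y4_X2, 3))   # approx.  0.187
```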
To measure the importance of variable Xk in component Yi, some statisticians use the coefficient eik while others use the correlation ρ(Yi, Xk). One of the reasons for not using the correlations is that they only measure the univariate contribution of an individual X to a component Y; that is, they do not indicate the importance of an X to a component Y in the presence of the other X’s. In particular, Rencher, as cited in Johnson and Wichern (2002), recommends using eik instead of ρ(Yi, Xk) to interpret the components. However, Johnson and Wichern (2002) stated “Although coefficients and the correlations can lead to different rankings as measures of the importance of the variables to a given component, it is our experience that these rankings are often not appreciably different.” and they have recommended that both eik and ρ(Yi, Xk) be examined to help interpret the principal components.
At the beginning of this article, it was mentioned that principal component analysis produces new variables (called components) that are fewer in number than the original variables yet retain as much of the total variance of the original variables as possible. By retaining most of that variability, the resulting components can replace the original variables. This can be demonstrated as follows.
In the example above, suppose that we take only two components, namely Y1 and Y2. What fraction of the total variance of the original variables is retained by Y1 and Y2 altogether? The proportion of the total population variance retained by the first principal component, Y1, is λ1 / (λ1 + λ2 + λ3 + λ4) = 71.224 / 120.001 = 59.35%. The proportion of the total population variance retained by the second component, Y2, is λ2 / (λ1 + λ2 + λ3 + λ4) = 31.511 / 120.001 = 26.26%. As a consequence, if we use only two components to replace the original variables, the proportion of the total variance preserved by the two components is 59.35% + 26.26% = 85.61%. Thus, we can replace X1, X2, X3, X4 with the two components Y1 and Y2 while retaining most (85.61%) of the total variance.
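These proportions are simply ratios of eigenvalues to their total; the sketch below computes the per-component and cumulative proportions from the eigenvalues of the example.

```python
import numpy as np

lam = np.array([71.224, 31.511, 14.343, 2.923])   # eigenvalues of Sigma

explained = lam / lam.sum()          # proportion retained by each component
cumulative = np.cumsum(explained)    # proportion retained by the first k components

print(np.round(explained, 4))    # [0.5935 0.2626 0.1195 0.0244]
print(np.round(cumulative, 4))   # [0.5935 0.8561 0.9756 1.    ]

# Keeping the first two components retains about 85.61% of the total variance.
```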
Reference
Johnson, R. A., & Wichern, D. W. (2002). Applied Multivariate Statistical Analysis (5th ed.). Pearson Education International.