## PCA and data projection issue

### George Pavlidis

on 29 Jan 2018 at 11:06
Latest activity: commented on by George Pavlidis on 29 Jan 2018 at 18:37

I am using PCA to identify the main data components and re-project the data onto that space. With X the input (row-wise MxN data), Y the output and D the required number of dimensions, this is typically done as:

```
[~,Y] = pca( X );   % second output holds the principal component scores
Y = Y(:,1:D);       % keep only the first D components
```

If I want to do this "manually", I would compute the covariance matrix, then the eigenvalues/eigenvectors, and multiply the data by the new coordinate system as follows:

```
X = X - mean(X);                    % 'center' the data around zero
A = (X'*X) / (size(X,1)-1);         % covariance matrix, normalised by N-1 as cov/pca do
                                    % (note: length(X) returns max(M,N), wrong when N > M)
[V,E] = eig(A);                     % eigenvectors; for a symmetric matrix the
                                    % eigenvalues come back in increasing order
[~,idx] = sort(diag(E),'descend');  % sort explicitly instead of relying on fliplr
V = V(:,idx(1:D));                  % take only the D most significant eigenvectors
Y = X * V;                          % project the centred data onto the new system
```

Unfortunately, the two methods above do not produce the same results! In particular, and this is the interesting part, some of the resulting values are equal and some have flipped signs. If I take the difference of the absolute values, I get almost zero (on the order of 1e-14) for all matrix elements.
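A quick way to check that the two routes agree up to the arbitrary sign of each component is to align the signs column by column before comparing. This is a sketch on made-up data; the variable names and the `randn` input are illustrative:

```
X = randn(100, 5);                      % example data (assumed)
D = 3;
[~, Y1] = pca(X);                       % built-in route
Y1 = Y1(:, 1:D);

Xc = X - mean(X);                       % manual route
A  = (Xc' * Xc) / (size(X,1) - 1);
[V, E] = eig(A);
[~, idx] = sort(diag(E), 'descend');
V  = V(:, idx(1:D));
Y2 = Xc * V;

s = sign(sum(Y1 .* Y2));                % +1/-1 per column, undoes any flipped sign
maxDiff = max(max(abs(Y1 - Y2 .* s)));  % left with only floating point noise
```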

I even tried simple examples like the one presented here, but I see the same issue.

Any ideas?

### John D'Errico

on 29 Jan 2018 at 15:16 (edited 29 Jan 2018 at 15:17)

Flipped signs are completely irrelevant.

An eigenvector is not unique, since you can multiply it by any nonzero constant and still have a valid eigenvector. A factor of -1 does not even impact the norm. So flipping the sign on it changes nothing, just that factor of -1.
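This is easy to verify directly: if v is an eigenvector of A with eigenvalue lambda, then so is -v. A minimal sketch with a made-up symmetric matrix:

```
A = [2 1; 1 2];                 % any symmetric matrix will do (assumed example)
[V, E] = eig(A);
v = V(:, 1);                    % an eigenvector with eigenvalue E(1,1)
norm(A*v    - E(1,1)*v)         % near zero: v is an eigenvector
norm(A*(-v) - E(1,1)*(-v))      % near zero: so is -v
```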

A relative difference of 1e-14 is also irrelevant. Just floating point trash, because the computations were done in a different sequence. NEVER trust the least significant bits of a floating point number.

So, no, that is not the "interesting" part. In fact, nothing about what you have said is even remotely surprising.

### George Pavlidis

on 29 Jan 2018 at 15:36

You are right, of course, regarding the sign flipping. But the interesting thing was that the issue repeats: I get the same flipped results every time. In theory there is nothing wrong with getting the opposite direction, but the way it comes up in the computations was consistent enough to suggest some pattern, possibly relating to the MATLAB engine...

### John D'Errico

on 29 Jan 2018 at 18:29

If you call eig twice, you will get the same signs. eig and svd are, as I recall, deterministic (as opposed to tools like svds or eigs). Call them twice with EXACTLY the same input, and you should get the same output.

But two different PCA codes may not get the same signs each time, because there are multiple subtly different ways of doing the computations. This can easily result in opposite signs between methods.

For example, you can do a PCA using eig OR svd. I am quite confident that some of the signs will be arbitrarily flipped between eig and svd, and consistently so. Repeat the call, and they will not change.
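One way to make the comparison between the two routes deterministic is to impose the same sign convention on both sets of vectors, e.g. force the largest-magnitude entry of each column to be positive. A sketch on made-up data; any consistent convention would do:

```
X  = randn(100, 5);                 % example data (assumed)
Xc = X - mean(X);

[V, E] = eig(Xc' * Xc);             % eig route: eigenvectors of the scatter matrix
[~, idx] = sort(diag(E), 'descend');
V = V(:, idx);

[~, ~, W] = svd(Xc, 'econ');        % svd route: right singular vectors are the PCs

% Sign convention: make the largest-magnitude entry of each column positive
for k = 1:size(V, 2)
    [~, i] = max(abs(V(:, k)));  V(:, k) = V(:, k) * sign(V(i, k));
    [~, j] = max(abs(W(:, k)));  W(:, k) = W(:, k) * sign(W(j, k));
end
max(max(abs(V - W)))                % only floating point noise remains
```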

### George Pavlidis

on 29 Jan 2018 at 18:37

Thanks. I figured out the same thing, but I was wondering if there is any insight into what's happening behind the scenes...