I am using PCA to identify the main components of my data and re-project the data onto that space. With X the input (an MxN matrix, one observation per row), Y the output, and D the required number of dimensions, this is typically done as:
[~, Y] = pca(X);
Y = Y(:, 1:D);
If I want to do this "manually", I would compute the covariance matrix, then the eigenvalues/eigenvectors, and finally multiply the data by the new coordinate system, as follows:
X = X - mean(X);        % 'center' the data around zero
A = (X'*X) / length(X); % compute the covariance matrix (normalised by the number of elements)
[V, ~] = eig(A);        % compute the eigenvectors -- eig returns them in increasing order
V = fliplr(V);          % flip the eigenvectors so that the most significant come first
V = V(:, 1:D);          % take only those eigenvectors required
Y = X * V;              % project the original data onto the new coordinate system
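The sign ambiguity described below is not MATLAB-specific. Here is a sketch of the same manual steps in Python/NumPy (random data and the SVD-based reference are my own illustration, not from the question), showing that the two projections agree only up to sign:

```python
import numpy as np

# Hypothetical data: 100 observations of 3 variables, keep D = 2 components.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
D = 2

Xc = X - X.mean(axis=0)        # center the data around zero
A = (Xc.T @ Xc) / len(Xc)      # covariance matrix (scaling does not change the eigenvectors)
w, V = np.linalg.eigh(A)       # eigh returns eigenvalues in increasing order
V = V[:, ::-1]                 # most significant eigenvectors first
Y_manual = Xc @ V[:, :D]       # project onto the new coordinate system

# Reference projection via SVD of the centered data.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Y_svd = Xc @ Vt.T[:, :D]

# Some columns may differ in sign, but the absolute values match
# to within floating point noise.
print(np.max(np.abs(np.abs(Y_manual) - np.abs(Y_svd))))
```

Comparing `np.abs(...)` of the two results reproduces the "almost zero difference of absolute values" observation from the question.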
Unfortunately, the two methods above do not produce the same results! In particular, and this is the interesting part, some of the resulting values are equal while others have flipped signs! If I take the difference of the absolute values, I get almost zero (on the order of 1e-14) for all matrix elements.
I even tried simple examples like the one presented here, but I see the same issue.
Flipped signs are completely irrelevant.
An eigenvector is not unique: you can multiply it by any nonzero constant and still have a valid eigenvector. A factor of -1 does not even change the norm. So flipping the sign changes nothing but that factor of -1.
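A minimal numeric check of that statement (the 2x2 symmetric matrix is just a made-up example): if v is an eigenvector of A with eigenvalue lambda, then so is -v, with the same eigenvalue and the same norm.

```python
import numpy as np

# Hypothetical symmetric matrix for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, V = np.linalg.eigh(A)
v, lam = V[:, 0], w[0]

assert np.allclose(A @ v, lam * v)         # v is an eigenvector
assert np.allclose(A @ (-v), lam * (-v))   # so is -v, same eigenvalue
assert np.isclose(np.linalg.norm(v), np.linalg.norm(-v))  # identical norm
```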
A relative difference of 1e-14 is also irrelevant. It is just floating point trash, caused by the computations being done in a different sequence. NEVER trust the least significant bits of a floating point number.
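You can see that order-dependence with nothing more than three additions. The two sums below are mathematically identical, yet differ in the last bits:

```python
# Changing only the order of operations perturbs the least
# significant bits of a double.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)

print(a == b)       # False
print(abs(a - b))   # on the order of 1e-16
```

The same effect, accumulated over the many operations of an eigendecomposition versus an SVD, easily produces differences on the order of 1e-14.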
So, no, that is not the "interesting" part. In fact, nothing about what you have said is even remotely surprising.