I have a large (n x n) sparse matrix W with NN nonzero entries per row. I would like to normalize this matrix so that each row sums to 1, but I'm running into some numerical issues.
I store the nonzero values in a matrix vals_W, with corresponding row and column indices idx and jdx. I then assemble the matrix W that I want to row-normalize, and normalize it, as follows:
W = sparse(idx, jdx, reshape(vals_W,1,NN*n));
sum_vals = sum(W,2);
W_normalized = sparse(idx, jdx, reshape(vals_W./sum_vals,1,NN*n));
However, the following two computations yield very different results:
sum1 = sum(vals_W./sum_vals,2);
sum2 = sum(W_normalized,2);
They seem like they should be mathematically equivalent. Is there something about sparse storage that causes this discrepancy, or am I just coding this incorrectly? What's the best way to get around this problem?
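For concreteness, here is a minimal self-contained sketch of my setup. The sizes, the random data, and the layout of vals_W as an n-by-NN matrix (row i holding the NN values for row i of W) are assumptions made for illustration only; the elementwise division relies on implicit expansion (R2016b or later):

```matlab
n = 5; NN = 3;                     % illustrative sizes only
idx = repmat((1:n)', NN, 1);       % row index of each stored value
jdx = randi(n, NN*n, 1);           % column indices (arbitrary here)
vals_W = rand(n, NN);              % assumed layout: row i holds the values of row i of W

W = sparse(idx, jdx, reshape(vals_W, 1, NN*n));
sum_vals = sum(W, 2);              % n-by-1 (sparse) row sums
W_normalized = sparse(idx, jdx, reshape(vals_W ./ sum_vals, 1, NN*n));

sum1 = sum(vals_W ./ sum_vals, 2); % row sums of the value matrix
sum2 = sum(W_normalized, 2);       % row sums of the assembled matrix
```

With this layout both sums should be 1 in exact arithmetic, which is why the discrepancy I see in my real data is confusing me.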