Find the rigid transformation & Least Root Mean Square distance between two paired sets of points
% Find the Least Root Mean Square between two sets of N points in D dimensions
% and the rigid transformation (i.e. translation and rotation)
% to employ in order to bring one set that close to the other,
% using the Kabsch (1976) algorithm.
% Note that the points are paired, i.e. we know which point in one set
% should be compared to a given point in the other set.
%
% References:
% 1) Kabsch W. A solution for the best rotation to relate two sets of vectors. Acta Cryst A 1976;32:922-3.
% 2) Kabsch W. A discussion of the solution for the best rotation to relate two sets of vectors. Acta Cryst A 1978;34:827-8.
% 3) http://cnx.org/content/m11608/latest/
% 4) http://en.wikipedia.org/wiki/Kabsch_algorithm
%
% We slightly generalize, allowing weights given to the points.
% Those weights are determined a priori and do not depend on the distances.
%
% We work in the convention that points are column vectors;
% some use the convention where they are row vectors instead.
%
% Input variables:
% P : a D*N matrix where P(a,i) is the ath coordinate of the ith point
% in the 1st representation
% Q : a D*N matrix where Q(a,i) is the ath coordinate of the ith point
% in the 2nd representation
% m : (Optional) a row vector of length N giving the weights, i.e. m(i) is
% the weight to be assigned to the deviation of the ith point.
% If not supplied, we take by default the unweighted (or equal weighted)
% m(i) = 1/N.
% The weights do not have to be normalized;
% we divide by the sum to ensure sum_{i=1}^N m(i) = 1.
% The weights must be nonnegative with at least one positive entry.
% Output variables:
% U : a proper orthogonal D*D matrix, representing the rotation
% r : a D-dimensional column vector, representing the translation
% lrms: the Least Root Mean Square
%
% Details:
% If p_i, q_i are the ith point (as a D-dimensional column vector)
% in the two representations, i.e. p_i = P(:,i) etc., and for
% p_i' = U p_i + r (' does not stand for transpose!)
% we have p_i' ~ q_i, that is,
% lrms = sqrt( sum_{i=1}^N m(i) (p_i' - q_i)^2 )
% is the minimal rms when going over the possible U and r.
% (assuming the weights are already normalized).
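The procedure described above can be sketched as follows in Python/NumPy (a sketch under the stated conventions, not the File Exchange code itself; `kabsch` and all variable names are illustrative). Points are columns, weights are normalized internally, and the reflection check follows the corrected `det(W*V')` form discussed in the comments below:

```python
import numpy as np

def kabsch(P, Q, m=None):
    """Weighted Kabsch: find rotation U, translation r and lrms so that
    U @ P + r best matches Q in the weighted least-squares sense.
    P, Q are D x N with points as columns; m is an optional weight vector."""
    D, N = P.shape
    if m is None:
        m = np.full(N, 1.0 / N)        # default: equal weights
    m = np.asarray(m, dtype=float)
    m = m / m.sum()                    # normalize so sum(m) == 1

    # Weighted centroids; centering removes the translation part.
    p0 = P @ m
    q0 = Q @ m
    Pc = P - p0[:, None]
    Qc = Q - q0[:, None]

    # Weighted covariance matrix and its SVD (C = V * diag(s) * Wt).
    C = (m * Pc) @ Qc.T                # broadcasting plays the role of bsxfun
    V, _, Wt = np.linalg.svd(C)
    W = Wt.T

    # Force a proper rotation (det(U) = +1), correcting a reflection if needed.
    d = np.sign(np.linalg.det(W @ V.T))
    S = np.diag([1.0] * (D - 1) + [d])
    U = W @ S @ V.T

    r = q0 - U @ p0
    diff = U @ P + r[:, None] - Q
    lrms = np.sqrt(np.sum(m * np.sum(diff**2, axis=0)))
    return U, r, lrms
```

When Q is an exact rigid transform of P, the recovered U and r reproduce that transform and lrms is zero up to rounding.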
%
1.18 - Replaced loops by bsxfun() for efficiency.

1.17 - 24/10/2012: corrected the check of whether a reflection is needed, from det(C) < 0 to det(W*V') < 0.
Joshua Kelly
@Mina Henein (15 Feb 2017): Can you be more specific? It works for me. The only case where it fails for me is when P and Q are of size (D,1), because rotating a single point about its own center of mass makes no sense. My test code is:
N=5;
angle = 1;
pts = rand(2,N)
%Simple 2x2 rotation matrix I wrote myself
%https://en.wikipedia.org/wiki/Rotation_matrix
M = rot2D(angle);
P = pts;
Q = M*pts;
[u,r,lrms] = Kabsch(P,Q);
M - u % Should be all zeros, or close to it
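The same sanity check can be sketched in Python/NumPy (`rot2d` and `kabsch2d` are illustrative stand-ins here, not the commenter's `rot2D` or the File Exchange `Kabsch.m`): build a known 2-D rotation, rotate random points, and confirm the algorithm recovers it.

```python
import numpy as np

def rot2d(a):
    """Standard 2x2 rotation matrix for angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def kabsch2d(P, Q):
    """Minimal unweighted Kabsch for 2-D column-vector point sets."""
    p0 = P.mean(axis=1, keepdims=True)
    q0 = Q.mean(axis=1, keepdims=True)
    C = (P - p0) @ (Q - q0).T
    V, _, Wt = np.linalg.svd(C)
    W = Wt.T
    d = np.sign(np.linalg.det(W @ V.T))   # guard against a reflection
    U = W @ np.diag([1.0, d]) @ V.T
    return U, (q0 - U @ p0).ravel()

rng = np.random.default_rng(1)
pts = rng.random((2, 5))
M = rot2d(1.0)
U, r = kabsch2d(pts, M @ pts)
print(np.max(np.abs(M - U)))   # close to zero
```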
Mina Henein
Trying the algorithm with a set of P & Q that are basically the same, I expect to get U = eye(D), r = zeros(D,1) and lrms = 0, but I get different results!
Am I doing anything wrong, or is there a quick fix for that?
P =
Columns 1 through 11
1.0000 0.8421 0.6842 0.5263 0.3684 0.2105 0.0526 -0.1053 -0.2632 -0.4210 -0.5789
10.0000 7.3684 4.7368 2.1053 -0.5263 -3.1579 -5.7895 -8.4211 -11.0526 -13.6842 -16.3158
10.0000 10.2632 10.5263 10.7895 11.0526 11.3158 11.5789 11.8421 12.1053 12.3684 12.6316
Columns 12 through 20
-0.7368 -0.8947 -1.0526 -1.2105 -1.3684 -1.5263 -1.6842 -1.8421 -2.0000
-18.9474 -21.5790 -24.2105 -26.8421 -29.4737 -32.1053 -34.7368 -37.3684 -40.0000
12.8947 13.1579 13.4210 13.6842 13.9473 14.2105 14.4736 14.7368 15.0000
Q =
Columns 1 through 11
1.0000 0.8421 0.6842 0.5263 0.3684 0.2105 0.0526 -0.1053 -0.2632 -0.4211 -0.5789
10.0000 7.3684 4.7368 2.1053 -0.5263 -3.1579 -5.7895 -8.4211 -11.0526 -13.6842 -16.3158
10.0000 10.2632 10.5263 10.7895 11.0526 11.3158 11.5789 11.8421 12.1053 12.3684 12.6316
Columns 12 through 20
-0.7368 -0.8947 -1.0526 -1.2105 -1.3684 -1.5263 -1.6842 -1.8421 -2.0000
-18.9474 -21.5789 -24.2105 -26.8421 -29.4737 -32.1053 -34.7368 -37.3684 -40.0000
12.8947 13.1579 13.4211 13.6842 13.9474 14.2105 14.4737 14.7368 15.0000
Cheers,
Mina
Daniel
For large N, most of the time is spent in the two for loops. bsxfun is useful for removing them. Replace these lines in Kabsch.m:

Pdm = zeros(D,N) ;
for i=1:N
Pdm(:,i) = m(i)*P(:,i) ;
end
C = Pdm*Q' ;

by

C = bsxfun(@times,m,P)*Q';

and the lines

lrms = 0 ;
for i=1:N
lrms = lrms + m(i)*Diff(:,i)'*Diff(:,i) ;
end
lrms = sqrt(lrms) ;

by

lrms = sqrt(sum(sum(bsxfun(@times,m,Diff).*Diff))) ;

and the execution speed increases by a factor of ~70 for N = 3'000'000.
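The same vectorization can be sketched in NumPy, where broadcasting plays the role of bsxfun (variable names mirror the MATLAB snippets above; the data here is just random filler):

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 3, 1000
m = rng.random(N); m /= m.sum()        # normalized weights (row-vector role)
P = rng.normal(size=(D, N))
Q = rng.normal(size=(D, N))
Diff = rng.normal(size=(D, N))

# Loop versions, as in the original Kabsch.m ...
C_loop = np.zeros((D, D))
for i in range(N):
    C_loop += m[i] * np.outer(P[:, i], Q[:, i])
lrms_loop = np.sqrt(sum(m[i] * Diff[:, i] @ Diff[:, i] for i in range(N)))

# ... and the broadcast versions, the analogue of bsxfun(@times, m, P):
C = (m * P) @ Q.T
lrms = np.sqrt(np.sum((m * Diff) * Diff))
```

Both pairs compute the same quantities; the broadcast form avoids the Python-level (or MATLAB-level) loop over all N points.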
Andreas Martin
I found a little mistake that results in numerical jitter under some circumstances.
If you change the line
"if (det(C) < 0)" to "if (det(W*V') < 0)",
everything works fine.
Cheers, Andreas
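The point of that fix: for degenerate data (e.g. coplanar 3-D points) the covariance C is rank-deficient, so det(C) vanishes up to rounding and its sign tells you nothing, whereas det(W*V') built from the SVD's orthogonal factors is always ±1. A NumPy illustration (a sketch, not the File Exchange code; the rotation R here is just an example):

```python
import numpy as np

rng = np.random.default_rng(2)
# Coplanar 3-D points: third coordinate identically zero.
P = np.vstack([rng.normal(size=(2, 50)), np.zeros(50)])
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])   # 90-degree rotation about z
Q = R @ P

Pc = P - P.mean(axis=1, keepdims=True)
Qc = Q - Q.mean(axis=1, keepdims=True)
C = Pc @ Qc.T                      # rank <= 2, so det(C) is ~0

V, _, Wt = np.linalg.svd(C)
W = Wt.T
print(np.linalg.det(C))            # ~0: sign is pure rounding noise
print(np.linalg.det(W @ V.T))      # exactly +1 or -1 (orthogonal factors)
```

Testing det(C) < 0 therefore flips unpredictably in such cases (the "numerical jitter"), while det(W*V') gives a well-defined reflection flag.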