http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747
MATLAB Central Newsreader - How to make the function 'norm' treat its input as vectors?

Wed, 13 Oct 2010 03:24:04 +0000
How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#787292
Peipei Yang
Hello, everyone.<br>
I have a question for your help.<br>
In MATLAB, the function norm can deal with a matrix or a vector, and it treats its input differently according to its type: norm(A) returns the largest singular value of A if A is a matrix, but sum(abs(A).^2)^(1/2) if A is a vector. However, I found that for a vector A, the running time of norm(A) is shorter than that of sum(abs(A).^2)^(1/2), so I prefer using norm(A) to calculate the norm of a vector.<br>
Now I have m vectors stored in a matrix A, each row of which is a vector, and I want to calculate their norms respectively. It would be convenient to calculate them by<br>
sum(A.^2,2).^(1/2), but as I said before, using the function 'norm' takes less calculation time. However, directly using norm(A) will return the matrix norm of A instead of the norm of each vector. How can I make the function 'norm' treat its input A as a sequence of vectors rather than as a matrix?<br>
Thank you!<br>
<br>
I posted this question before but with a wrong subject. This is the first time I have posted a message here and I can't edit or delete the message, so I have to repost the question with the correct subject. If you know how to delete a message posted by myself, please tell me. Thanks very much!
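For reference, a minimal sketch of the vectorized row-wise norms the question asks about (the matrix A here is just example data):

```matlab
% Row-wise 2-norms of A (each of the m rows is one vector),
% without calling NORM in a loop.
A = rand(5, 3);
rowNorms = sqrt(sum(A.^2, 2));   % m-by-1 vector of 2-norms

% Sanity check against NORM applied to each row separately:
for k = 1:size(A, 1)
    assert(abs(rowNorms(k) - norm(A(k, :))) < 1e-12)
end
```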

Wed, 13 Oct 2010 06:43:04 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#787317
Bruno Luong
What do you get when you run this function:<br>
<br>
function benchnorm<br>
<br>
n = 1e6;<br>
ntest = 100;<br>
time = zeros(ntest,3);<br>
for k=1:ntest<br>
a=rand(1,n);<br>
<br>
tic;<br>
b = norm(a);<br>
time(k,1) = toc;<br>
<br>
tic;<br>
b = sqrt(sum(a.*a));<br>
time(k,2) = toc;<br>
<br>
tic;<br>
b = sqrt(sum(a.^2));<br>
time(k,3) = toc;<br>
end<br>
<br>
time = median(time,1);<br>
fprintf('norm(a) > %f\n', time(1));<br>
fprintf('sqrt(sum(a.*a)) > %f\n', time(2));<br>
fprintf('sqrt(sum(a.^2)) > %f\n', time(3));<br>
<br>
end<br>
<br>
% Bruno

Wed, 13 Oct 2010 13:50:07 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#787421
Peipei Yang
Hello, thanks for your attention.<br>
I copied the code and ran it on my platform, and got the following results:<br>
<br>
norm(a) > 0.002759<br>
sqrt(sum(a.*a)) > 0.007179<br>
sqrt(sum(a.^2)) > 0.007588<br>
<br>
So it looks like norm(a) takes less calculation time than the other two, and therefore I would like to use norm(a) to calculate the norms of vectors, but it treats the input as a matrix rather than as a sequence of vectors. Is there any way to deal with this problem?<br>
Thanks.<br>
<br>
"Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <i93kdo$o8f$1@fred.mathworks.com>...<br>
> What do you get when you run this function:<br>
> [benchnorm code snipped; quoted in full in the previous message]<br>

Wed, 13 Oct 2010 15:03:03 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#787452
Jan Simon
Dear Yang,<br>
<br>
I've created a CMex function for the 2norm along a specific dimension.<br>
For [1 x 10000] vectors it is 20% faster than NORM, but for [1 x 100] vectors the overhead of calling the mex eats up the gain, such that NORM is 35% faster than the CMex. <br>
Anyhow, for arrays, allocating the memory for the x.*x array and for the vector returned by SUM consumes a remarkable chunk of time, such that the CMex, which calculates all values elementwise, is 20 to 50% faster than SQRT(SUM(x.*x)).<br>
If you are interested, I could publish it.<br>
<br>
But I'm still surprised that such a common task is not solved by a builtin function.<br>
<br>
Kind regards, Jan
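For readers without a compiler, a plain-MATLAB stand-in for what Jan's CMex DNorm2 computes (the 2-norm along a chosen dimension) might look like the sketch below; the helper name norm2AlongDim is made up for illustration, only DNorm2 itself is from the thread:

```matlab
% Plain-MATLAB stand-in for a 2-norm along a chosen dimension
% (the operation Jan's CMex DNorm2 provides natively).
% The handle name norm2AlongDim is hypothetical.
norm2AlongDim = @(x, dim) sqrt(sum(x .* x, dim));

x = rand(4, 6);
colNorms = norm2AlongDim(x, 1);   % 1-by-6: norm of each column
rowNorms = norm2AlongDim(x, 2);   % 4-by-1: norm of each row
```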

Wed, 13 Oct 2010 16:25:07 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#787476
Bruno Luong
"Peipei Yang" <ppyang84@gmail.com> wrote in message <i94def$3h4$1@fred.mathworks.com>...<br>
> Hello, thanks for your attention.<br>
> I copy the code to run in my platform and got the result as follow:<br>
> <br>
> norm(a) > 0.002759<br>
> sqrt(sum(a.*a)) > 0.007179<br>
> sqrt(sum(a.^2)) > 0.007588<br>
> <br>
<br>
Indeed, there is a significant difference in favor of NORM. Strangely, on my two computers norm(a) needs similar CPU time to, or is even slightly slower than, the two other methods. Maybe it has something to do with a parallelized builtin function.<br>
<br>
As I can't reproduce your result, I'm not sure what to do next to accelerate the calculation. Maybe you can try Jan's code.<br>
<br>
Bruno

Wed, 13 Oct 2010 18:54:04 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#787525
Matt J
By modifying Bruno's benchnorm, I've found some contenders that are faster than norm().<br>
<br>
<br>
norm(a) > 0.003428<br>
sqrt(a*a.') > 0.001468<br>
sqrt(mtimesx(a,a,'t')) > 0.001392<br>
<br>
The problem is that a*a.' can't easily be generalized to multiple rows, but the approach using mtimesx can be. It is available here:<br>
<br>
<a href="http://www.mathworks.com/matlabcentral/fileexchange/25977-mtimesx-fast-matrix-multiply-with-multi-dimensional-support">http://www.mathworks.com/matlabcentral/fileexchange/25977-mtimesx-fast-matrix-multiply-with-multi-dimensional-support</a>

Wed, 13 Oct 2010 19:23:05 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#787543
Bruno Luong
You might try this package (MEX setup required)<br>
<a href="http://www.mathworks.com/matlabcentral/fileexchange/24576">http://www.mathworks.com/matlabcentral/fileexchange/24576</a><br>
<br>
A=rand(1e6,10);<br>
<br>
% norm of each column of A<br>
res = zeros(1,size(A,2));<br>
for k=1:size(A,2)<br>
Ak = inplacecolumn(A,k);<br>
res(k) = norm(Ak);<br>
releaseinplace(Ak);<br>
end<br>
<br>
Bruno

Wed, 13 Oct 2010 19:26:03 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#787547
Matt J
"Peipei Yang" <ppyang84@gmail.com> wrote in message <i938ok$smb$1@fred.mathworks.com>...<br>
><br>
> Now I have m vectors storing in a matrix A, each row of which is a vector, and I want to calculate their norms respectively. <br>
=======<br>
<br>
It's usually a bad idea to concatenate vectors into a matrix row-wise, since column-wise memory access in MATLAB is much faster. (As a side note, it is unfortunate that many MATLAB functions, e.g. convhulln, ask you to organize matrices this way.)<br>
<br>
If you can reorganize your code to store vectors columnwise, mtimesx will give you some pretty good acceleration. Compare:<br>
<br>
mtimesx SPEED;<br>
N = 1e5;<br>
M = 100;<br>
<br>
a=rand(N,M);<br>
<br>
%%Columnwise norms<br>
<br>
tic;<br>
b = sqrt(sum(a.*a));<br>
toc;<br>
%Elapsed time is 0.082442 seconds.<br>
<br>
<br>
<br>
tic;<br>
b = sqrt(sum(a.^2));<br>
toc;<br>
%Elapsed time is 0.092637 seconds.<br>
<br>
<br>
tic;<br>
a=reshape(a,N,[],M);<br>
b = sqrt(mtimesx(a,'t',a));<br>
b=b(:).';<br>
toc;<br>
%Elapsed time is 0.026846 seconds.

Wed, 13 Oct 2010 21:40:07 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#787591
Jan Simon
Dear Matt,<br>
<br>
> norm(a) > 0.003428<br>
> sqrt(a*a.') > 0.001468<br>
> sqrt(mtimesx(a,a,'t')) > 0.001392<br>
<br>
Fine! What a pity that this does not work for arrays.<br>
<br>
I've tried to let a Mex call BLAS:dnrm2 directly:<br>
x = rand(10000, 1);<br>
tic; for i=1:1000; q = sqrt(sum(x .* x)); clear('q'); end<br>
>> 0.075 sec<br>
tic; for i=1:1000; q = sqrt(x.' * x); clear('q'); end<br>
>> 0.038 sec<br>
tic; for i=1:1000; q = DNorm2(x); clear('q'); end<br>
>> 0.038 sec<br>
tic; for i=1:1000; q = DNorm2_BLAS(x); clear('q'); end<br>
>> 0.056 sec<br>
<br>
I've implemented 2 methods in the CMex DNorm2 for operating on a dimension different from the first one:<br>
A) Calculate the norm over the subvectors along the processed dimension.<br>
This processes the input array in a kind of row-wise fashion.<br>
B) Accumulate the squared input values in the output element by element (column-wise) and calculate the square root in the last iteration in addition. <br>
<br>
While A) has the drawback that it accesses the input array in steps (not neighboring elements), version B) has to cycle through the elements of the output several times.<br>
Therefore method A) is faster for [large x small] arrays (or equivalent ndim arrays), B) is faster for [small x large] arrays.<br>
Unfortunately I'm not able to determine "small" and "large" quantitatively. It seems to depend on the absolute and relative sizes, the cache size and the number of dimensions. So I gave up and applied the usual rule of thumb: 10000 is large...<br>
<br>
I think after this advertising I have to post the code on the FEX, although the function is not parallelized. I'm afraid that SQRT(SUM(X.*X)) benefits from MATLAB's builtin parallelization and beats DNorm2 without problems. But I cannot check this due to my historical Pentium M equipment.<br>
<br>
Jan

Wed, 13 Oct 2010 22:08:04 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#787601
Matt J
"Jan Simon" <matlab.THIS_YEAR@nMINUSsimon.de> wrote in message <i958vn$cjb$1@fred.mathworks.com>...<br>
> Dear Matt,<br>
> <br>
> > norm(a) > 0.003428<br>
> > sqrt(a*a.') > 0.001468<br>
> > sqrt(mtimesx(a,a,'t')) > 0.001392<br>
> <br>
> Fine! What a pity that this does not work for arrays.<br>
====<br>
<br>
It does! (See message 8).

Thu, 14 Oct 2010 02:17:03 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#787628
Peipei Yang
Dear all,<br>
You have provided so many good suggestions, which I will study and try.<br>
Thanks for your help! Thank you very much!

Thu, 14 Oct 2010 10:53:04 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#787724
Jan Simon
Dear Matt J,<br>
> > > norm(a) > 0.003428<br>
> > > sqrt(a*a.') > 0.001468<br>
<br>
> > Fine! What a pity that this does not work for arrays.<br>
<br>
> It does! (See message 8).<br>
<br>
It works, but the result is not the same and not what I expect as "norm":<br>
x = [1,2,3; 4,5,6]<br>
<br>
DNorm2(x, 1)<br>
>> 4.12 5.38 6.70<br>
<br>
sqrt(x' * x)<br>
>> 4.12 4.69 5.19<br>
4.69 5.38 6<br>
5.19 6 6.70<br>
<br>
DNorm2(x, 2)<br>
>> 3.74<br>
8.77<br>
<br>
sqrt(x * x.')<br>
>> 3.74 5.65<br>
5.65 8.77<br>
<br>
Ok, you find the wanted values on the diagonal. But for N-dim arrays this fails, and for larger arrays the waste of memory and the time for extracting the diagonal matter.<br>
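The diagonal workaround Jan describes, written out as a sketch (it reproduces his numbers, but builds the full m-by-m product only to discard most of it):

```matlab
x = [1, 2, 3; 4, 5, 6];

% Row norms via the diagonal of the full Gram matrix: correct values,
% but x*x.' allocates an m-by-m matrix whose off-diagonal entries are
% thrown away; that is the waste of memory Jan points out.
viaDiag = sqrt(diag(x * x.'));    % [3.7417; 8.7750]
direct  = sqrt(sum(x .^ 2, 2));   % same values, no m-by-m temporary
```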
<br>
Kind regards, Jan

Thu, 14 Oct 2010 11:55:06 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#787738
Bruno Luong
Jan:<br>
> <br>
> Ok, you find the wanted values on the diagonal. But for Ndim arrays this fails and for larger arrays the waste of memory and the time for extracting the diagonal matters.<br>
<br>
Not necessarily, Jan:<br>
<br>
A=rand(1000,10);<br>
<br>
% norm along the column<br>
[m n]=size(A);<br>
b=mtimesx(reshape(A,[m 1 n]),'t',reshape(A,[m 1 n]));<br>
b=sqrt(reshape(b,1,n))<br>
<br>
% Bruno

Thu, 14 Oct 2010 12:26:04 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#787746
Jan Simon
Dear Bruno,<br>
<br>
> b=mtimesx(reshape(A,[m 1 n]),'t',reshape(A,[m 1 n]));<br>
<br>
A clear advantage for MTIMESX. But this is not reachable by (x*x') using Matlab's builtin MTIMES, is it?<br>
Now I'm convinced that MTIMESX is useful for me.<br>
<br>
Thanks, Jan

Thu, 14 Oct 2010 13:28:05 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#787763
Matt J
"Jan Simon" <matlab.THIS_YEAR@nMINUSsimon.de> wrote in message <i96sss$c0c$1@fred.mathworks.com>...<br>
<br>
>> > Fine! What a pity, that this does not work for arrays.<br>
>> It does! (See message 8).<br>
>It works, but the result is not the same and not what I expect as "norm":<br>
>sqrt(x' * x)<br>
======<br>
<br>
But that is not the approach I proposed in Message 8! (it was the mtimesx approach).<br>
<br>
<br>
> > b=mtimesx(reshape(A,[m 1 n]),'t',reshape(A,[m 1 n]));<br>
> <br>
> A clear advantage for MTIMESX. But this is not reachable by (x*x') using Matlab's builtin MTIMES, is it?<br>
======<br>
<br>
No, and even mtimesx doesn't provide a way to do anything but columnwise norms with maximum efficiency (that I can see). So, your DNorm2 really ought to be the lead contender when it comes to norming along other dimensions.

Thu, 14 Oct 2010 16:05:07 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#787827
Jan Simon
Dear Matt,<br>
<br>
> But that is not the approach I proposed in Message 8! (it was the mtimesx approach).<br>
<br>
It is really hard for me to find message 8, obviously. I'm reading CSSM through the MathWorks pages with Firefox on a 366 MHz Pentium II laptop with 1024*768 pixels. Scrolling is not smooth, and 60% of the screen width is wasted by *fancy* empty space. I think this is a combination of peephole optimization and data hiding.<br>
Especially the grey borders on the sides are really free of sense and information.<br>
<br>
> No, and even mtimesx doesn't provide a way to do anything but columnwise norms with maximum efficiency (that I can see). So, your DNorm2 really ought to be the lead contender when it comes to norming along other dimensions.<br>
<br>
Let's wait and see. I think the OMP parallelization of MTIMESX is hard to beat on a 4-core processor. Perhaps somebody could omp my program, too.<br>
<br>
I have just a little experience with multithreading fSGolayFilt using Windows threads. It took me some days to develop a (finally trivial) method to determine the optimal number of threads: starting 8 threads to filter a [20 x 8] matrix wastes time. A general [M x N] array can be split horizontally or vertically for the different threads. If I split a [M x 3] matrix on a dual-core processor into a [M x 2] and a [M x 1] array, one thread is bored 50% of the time, so splitting into two [M/2 x 3] arrays is better.<br>
Finally I spent 40h on the optimization to get my program accelerated by 2 seconds. But this was efficient, because the total processing time was reduced from 6 to 4 seconds, while 5 seconds is the magic psychological limit for causing stress in users. After 5 seconds the user loses the feeling that the computer reacts to his action. <br>
<br>
Bye, Jan

Thu, 14 Oct 2010 17:54:04 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#787857
Matt J
"Jan Simon" <matlab.THIS_YEAR@nMINUSsimon.de> wrote in message <i979nj$arb$1@fred.mathworks.com>...<br>
><br>
><br>
> Let's wait and see. I think the OMPparallelization of MTIMESX is hard to beat on a 4core processor. Perhaps somebody could omp my program, too.<br>
========<br>
<br>
The speed of MTIMESX won't matter. If you take norms along anything but columns using mtimesx, you will need to first permute/transpose the data, and that overhead alone can equal the time it takes to do the norm calculation in unmexed MATLAB (see the comparison below). So, if your tool beats unmexed MATLAB, you've won!<br>
<br>
%Rowwise norms<br>
mtimesx SPEED;<br>
N = 1e5;<br>
M = 100;<br>
<br>
a=rand(N,M);<br>
<br>
<br>
<br>
%%Use pure MATLAB<br>
tic;<br>
b = sqrt(sum(a.*a,2));<br>
toc;<br>
%Elapsed time is 0.088921 seconds.<br>
<br>
<br>
%%Using MTIMESX <br>
tic;<br>
a=reshape(a.',M,1,N);<br>
toc;<br>
%Elapsed time is 0.089419 seconds.<br>
<br>
b = sqrt(mtimesx(a,'t',a));<br>
b=b(:).';<br>

Thu, 14 Oct 2010 18:38:03 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#787867
Bruno Luong
"Matt J " <mattjacREMOVE@THISieee.spam> wrote in message <i97g3s$eij$1@fred.mathworks.com>...<br>
<br>
> <br>
> The speed of MTIMESX won't matter. If you take norms along anything but columns using mtimesx, you will need to first permute/transpose the data, <br>
<br>
Is that true? Are you sure an explicit transposition is carried out? Maybe James can confirm it.<br>
<br>
Bruno

Thu, 14 Oct 2010 18:47:03 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#787871
Matt J
"Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <i97imb$4cl$1@fred.mathworks.com>...<br>
> "Matt J " <mattjacREMOVE@THISieee.spam> wrote in message <i97g3s$eij$1@fred.mathworks.com>...<br>
> <br>
> > <br>
> > The speed of MTIMESX won't matter. If you take norms along anything but columns using mtimesx, you will need to first permute/transpose the data, <br>
> <br>
> Is that true? Are you sure any explicit transposition is carried out? May be James can confirm it.<br>
======<br>
<br>
According to my best understanding of how MTIMESX works, yes. The partitioning of an nD array into submatrices by mtimesx is always done in memory-contiguous blocks. Since the rows of a matrix are not contiguous, I don't see how you can get mtimesx(A,B) to do operations between corresponding rows of A and B.

Thu, 14 Oct 2010 18:57:04 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#787873
Matt J
"Matt J " <mattjacREMOVE@THISieee.spam> wrote in message <i97j77$9db$1@fred.mathworks.com>...<br>
> [nested quotes snipped]<br>
> According to my best understanding of how MTIMESX works, yes. The partitioning of an nD array into submatrices by mtimesx is always in memorycontiguous blocks. Since rows of a matrix are not contiguous, I don't see how you can get mtimesx(A,B) to do operations between corresponding rows of A and B.<br>
======<br>
<br>
Sorry, just to be clear, I'm not saying that MTIMESX does any explicit transposition internally. I'm saying that in order to get MTIMESX to operate on discontiguous submatrices of the input arrays, you'll first need to permute the arrays in regular M-code until the submatrices _are_ contiguous.

Thu, 14 Oct 2010 19:35:03 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#787889
James Tursa
"Matt J " <mattjacREMOVE@THISieee.spam> wrote in message <i97j77$9db$1@fred.mathworks.com>...<br>
> [nested quotes snipped]<br>
> According to my best understanding of how MTIMESX works, yes. The partitioning of an nD array into submatrices by mtimesx is always in memorycontiguous blocks. Since rows of a matrix are not contiguous, I don't see how you can get mtimesx(A,B) to do operations between corresponding rows of A and B.<br>
<br>
Sorry I am late! I just noticed Jan Simon's post to my mtimesx FEX submission, which pointed me to this thread. So I am just now reading this thread for the first time and am not yet up to speed on the issues. So I will start by making some general comments about mtimesx as it applies to this calculation in one of Bruno's posts:<br>
<br>
> b=mtimesx(reshape(A,[m 1 n]),'t',reshape(A,[m 1 n]));<br>
<br>
The reshape function of course happens at the MATLAB level so this is transparent to mtimesx. The reshapes are pretty quick since they result in a shared data copy of A. So mtimesx will get these inputs<br>
<br>
A1(m,1,n) T * A2(m,1,n)<br>
<br>
So the end result is an nD dot product calculation of the columns. How mtimesx does this depends on the calculation mode chosen:<br>
<br>
'BLAS': Uses calls to DDOT in a loop for each column dot product.<br>
'MATLAB': Uses calls to DDOT in a loop for each column dot product.<br>
'SPEED': Uses custom C coded loops or DDOT calls, depending on which method it thinks may be faster (depends on complexity of inputs, whether it is a symmetric case with A1 & A2 actually pointing to same data area, etc.)<br>
'LOOPS': Uses custom C coded loops.<br>
'LOOPSOMP': Uses multithreaded C coded loops if m is large enough to warrant the extra overhead, else uses C coded loops.<br>
'SPEEDOMP': Makes a guess as to which of 'BLAS','LOOPS', or 'LOOPSOMP' is likely to be fastest and uses that.<br>
<br>
For this dot-product-of-columns case, there is of course no need to physically transpose any input, since it is mainly a dimension bookkeeping issue (an m x 1 vector in memory is the same as its transpose in memory).<br>
<br>
The multithreaded OpenMP stuff in mtimesx is very new, and was only added a couple of weeks ago. I have not yet implemented everything that I plan to. For example, in the above calculation mtimesx will only use OpenMP if the value of m is sufficiently large to warrant the extra overhead of multithreading the individual dot products, i.e., it is only multithreading the first two dimensions of the calculation. What about the case of small m and large n? Obviously in that case one should not attempt to multithread the dot product calculation itself, but instead multithread on the third index. That is a future enhancement that I am currently working on but is *not* yet implemented in the current version of mtimesx.<br>
<br>
What about cases where a transpose operation involves a matrix and not a vector? In that case it is not just a bookkeeping issue ... there is a real transpose involved. In these cases mtimesx will typically just call a BLAS routine to do the work with appropriate inputs to indicate the transpose ... no physical transpose of the inputs is done a priori, it is simply done as part of the matrix multiply inside the BLAS routine itself.<br>
<br>
What about taking dot products of rows instead of columns? This is a different problem because of the contiguous data issue that has already been pointed out earlier in this thread. For the contiguous column case it was simple because the inputs could be reshaped into nD "vectors". Not so for the row case. It will hinge on whether or not the problem can be reformulated into a matrix multiply. I don't know how to do this for the general case, so at first look I think I agree with Matt that trying to use a matrix multiply for this will not work.<br>
<br>
James Tursa

Thu, 14 Oct 2010 19:36:04 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#787891
Bruno Luong
"Matt J " <mattjacREMOVE@THISieee.spam> wrote in message <i97jq0$hus$1@fred.mathworks.com>...<br>
<br>
> <br>
> Sorry, just to be clear, I'm not saying that MTIMESX does any explicit transposition internally. I'm saying that in order to get MTIMESX to operate on discontiguous submatrices of the input arrays, you'll first need to permute the arrays in regular Mcode until the submatrices _are_contiguous.<br>
<br>
OK I see what you meant. The time needed for matrix transposition is negligible, at least with my setup.<br>
<br>
Bruno

Fri, 15 Oct 2010 13:27:04 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#788069
Matt J
"Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <i97m34$ikr$1@fred.mathworks.com>...<br>
> "Matt J " <mattjacREMOVE@THISieee.spam> wrote in message <i97jq0$hus$1@fred.mathworks.com>...<br>
> <br>
> > <br>
> > Sorry, just to be clear, I'm not saying that MTIMESX does any explicit transposition internally. I'm saying that in order to get MTIMESX to operate on discontiguous submatrices of the input arrays, you'll first need to permute the arrays in regular Mcode until the submatrices _are_contiguous.<br>
> <br>
> OK I see what you meant. The time needed for matrix transposition is negligible, at least with my setup.<br>
======<br>
<br>
Well, not negligible in my setup, as the examples have shown. In any case, transposition time will be a data-size-dependent thing, as it involves a data copy.

Fri, 15 Oct 2010 15:39:04 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#788130
Jan Simon
Dear Matt,<br>
<br>
DNorm2 is published now.<br>
I've tried to find a smart method to decide between row-wise and column-wise processing. Unfortunately this depends on the compiler and the sizes of the first- and 2nd-level caches, such that my strategies remain very coarse.<br>
<br>
I'd be happy to see a speed comparison for multicore machines. And, as said already, please omp up my mex. ;)<br>
<br>
Jan

Fri, 15 Oct 2010 16:01:05 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#788139
James Tursa
"Jan Simon" <matlab.THIS_YEAR@nMINUSsimon.de> wrote in message <i99sio$g19$1@fred.mathworks.com>...<br>
> <br>
> I'd be happy to see a speed comparison for mutlicore machines. And, as said already, please omp up my mex. ;)<br>
> <br>
> Jan<br>
<br>
Hmmm ... that sounds mildly vulgar, Jan ... I may have to take you up on that offer! I will take a look ... at your code.<br>
<br>
James Tursa

Fri, 15 Oct 2010 16:38:04 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#788148
Jan Simon
Dear James,<br>
<br>
you can read my mind. In fact, my personal impression is that OMP parallelization is not very "sophisticated". Nevertheless, I admit that trying to develop a fast CMex function on a historical single-core Pentium M is not just less sophisticated, but even a little bit goofy.<br>
<br>
Fortunately my taste is not important:<br>
TIC TOC can decide scientifically if omp is pomp.<br>
If it is fast, I'll like it.<br>
<br>
Thanks, Jan

Fri, 15 Oct 2010 17:18:04 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#788162
James Tursa
"Jan Simon" <matlab.THIS_YEAR@nMINUSsimon.de> wrote in message <i9a01c$55n$1@fred.mathworks.com>...<br>
> Dear James,<br>
> <br>
> you can read my mind. In fact, my peronal impression is that OMPparallization is not very "sophisticated". Nevertheless, I admit that trying to develop a fast CMex function on a historical singlecore PentiumM is not just less sophisticated, but even a little bit goofy.<br>
> <br>
> Fortunately my taste is not important:<br>
> TIC TOC can decide scientifically if omp is pomp.<br>
> If it is fast, I'll like it.<br>
> <br>
> Thanks, Jan<br>
<br>
I took just a quick look at your code and it seems there are several loops that would not be too hard to convert to OpenMP. For a simple test, I took your CalcAbs and ran it on an Intel Dual Core 2 32-bit WinXP machine using R2010a with LCC and MSVC90, and did a baseline comparison to the MATLAB builtin abs function for a 20,000,000-element vector with 50% negative values:<br>
<br>
Relative time comparison:<br>
MSVC90 using OpenMP: Same speed as MATLAB abs<br>
MSVC90 w/o OpenMP: 30% slower than MATLAB abs<br>
LCC w/o OpenMP: 100% slower than MATLAB abs<br>
<br>
So just this simple test shows a worthwhile improvement if one has an OpenMP compiler available. I will look at your other loops over the weekend.<br>
<br>
James Tursa

Fri, 15 Oct 2010 17:45:05 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#788169
Bruno Luong
Can Jan shed some light on this friendly error message?<br>
<br>
>> uTest_DNorm2<br>
==== Test DNorm2: 15-Oct-2010 19:24:08<br>
Version: C:\Users\Bruno\Documents\matlab\DNorm2folder\DNorm2.mexw64<br>
<br>
ok: empty input<br>
??? Error using ==> uTest_DNorm2 at 53<br>
*** DNorm2: Too friendly on Dim=0<br>
<br>
Bruno

Fri, 15 Oct 2010 21:47:03 +0000
Re: How to make the function 'norm' treat its input as vectors?
http://www.mathworks.com/matlabcentral/newsreader/view_thread/293747#788232
Jan Simon
Dear Bruno,<br>
<br>
> Can Jan shed a light on this friendly error message?<br>
<br>
Thanks, Bruno, for reporting this bug!<br>
I've been always convinced that someday it might be useful to deliver the unittest functions.<br>
<br>
DNorm2 fails for the following problem:<br>
The 2nd input, the dimension, is 0, and reading it by this must fail:<br>
N = ((mwSize) (mxGetScalar(prhs[1]) + 0.5)) - 1;<br>
This works with 32-bit addressing, when mwSize is a signed int32_T, because the result is simply -1. But on a 64-bit machine mwSize is the unsigned size_t (according to tmwtypes.h), so 0 - 1 wraps around to a huge positive index. <br>
Puh, what a silly error, and it exists in a lot of my files.<br>
<br>
But this strange rounding method has a special purpose:<br>
The dimension arguments of MATLAB's toolbox functions accept all numerical types as input. And I was afraid that somebody uses a SINGLE (coming from something like SIN(PI/2)) to address the 83886072-th dimension. This should be possible, if one can address 2^64 bits. ;)<br>
<br>
Fixed as fast as possible, Jan