Thread Subject:
parfor loops and indexing

Subject: parfor loops and indexing

From: Jason Park

Date: 27 Oct, 2010 03:35:04

Message: 1 of 13

When using the Parallel Computing Toolbox (PCT) for optimization problems, probably the most useful and convenient features are parfor loops and the option that enables parallel computing in the Optimization Toolbox solvers (e.g. lsqcurvefit, fmincon). However, I receive error messages when I try to index into a variable with two sweeping indices i and j using nested parfor loops. For example:

parfor i=1:N % with N being a number
  R(i) = data(:,i) % data refers to the real data set loaded using "load data.mat"
  parfor j=1:M % with M also being a number
    error(i,j) = function(...,R(i)) - data(i,j); % this is what is supposed to be minimized
  end
end

This is part of a function, say FUN, which will later be minimized using fmincon or lsqnonlin.

Can anyone help me understand why I receive error messages when using parfor loops like this?

Thanks,
Jason

Subject: parfor loops and indexing

From: Edric M Ellis

Date: 27 Oct, 2010 07:28:51

Message: 2 of 13

"Jason Park" <jason.park@buseco.monash.edu.au> writes:

> Using the parallel computing toolbox (PCT) for optimization problems,
> probably the most useful and convenient features are parfor loops and
> the option allowing for parallel computing in using optimization
> toolboxes (i.e. lsqcurvefit, fmincon). Yet, I receive error messages
> when trying to double index to two sweeping variables i and j using
> multiple parfor loops. For example:

Just to be clear - there's no extra parallelism gained by running nested PARFOR
loops. All the MATLABPOOL workers are used up by the outermost PARFOR
loop; any inner loops will run as standard FOR loops (but with the
disadvantage that they must obey the extra constraints on PARFOR loops).

> parfor i=1:N % with N being a number
>
> R(i) = data(:,i) % data is referred to the real data set loaded using "load data.mat";
>
> parfor j=1:M % with M also being a number
>
> error(i,j) = function(...,R(i)) - data(i,j); % this is what is supposed to be minimized
>
> end
> end
>
> This is part of a function, say FUN, which will later be minimized using fmincon
> or lsqnonlin.
>
> Any help as to why I receive error messages upon using parfor loops
> like above?

Looks like the problem is that you're trying to calculate "error" as the
return from that loop. Aside from the fact that error is the name of a
rather useful function in MATLAB, PARFOR doesn't currently support that
form of indexing for sliced output variables - I would recast
that loop as follows:

parfor i = 1:N
  ...
  tmp = zeros(1,M);
  for j = 1:M
    tmp(j) = function(...);
  end
  error(i,:) = tmp;
end

which means that the indexing expression for "error" now matches the
PARFOR constraints. See

http://www.mathworks.com/help/toolbox/distcomp/brdqtjj-1.html#bq_of7_-1

for more about sliced variables.
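To make the slicing constraint concrete, here is a minimal, self-contained sketch (the sizes and the per-element computation are illustrative placeholders, not taken from Jason's code):

```matlab
% Minimal sketch of the sliced-output pattern PARFOR accepts.
N = 4; M = 5;
ERR = zeros(N, M);            % preallocate the sliced output
parfor i = 1:N
    tmp = zeros(1, M);        % plain local array inside the loop body
    for j = 1:M
        tmp(j) = i * j;       % stand-in for FUN(...) - data(i,j)
    end
    ERR(i,:) = tmp;           % one sliced assignment per iteration
end
```

The key point is that each iteration writes exactly one row, indexed only by the loop variable, so PARFOR can assign disjoint slices to different workers.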

Cheers,

Edric.

Subject: parfor loops and indexing

From: Jason Park

Date: 27 Oct, 2010 14:02:05

Message: 3 of 13

Thanks Edric. Your explanation was really useful, and so was the link you posted.
I'm trying to figure out how to build it so that each element matches the corresponding cell in the data set, which is also arranged as a matrix (any ideas about this?).

Recalling from the previous post: ERR(i,j) = FCN(...,R(i)) - data(i,j),
1) I was wondering how those PARFOR loops work once my "error" function (or objective function, more generally) is submitted to an optimization solver (e.g., lsqnonlin, fmincon). Does each worker calculate its share and communicate it back to the main optimization solver? I thought the parameters that the solver will ultimately solve for are supposed to be jointly determined, but as far as I understand, the PCT distributes "mutually independent" jobs to each worker, which doesn't seem to fit what my problem needs (please correct me if I'm wrong). I was worried that the PCT might not fully utilize the multiple cores on my processor this way.
2) What if there is another PARFOR loop in the function FCN above? I remember that the link you referred me to says a PARFOR loop can call a function that contains another PARFOR loop, but I'm wondering whether there are any other cautions to take on this issue.

I'm just new to the PCT and it seems like I need heaps of work on this to be able to use it to its full extent.

Thank you,
Jason

Subject: parfor loops and indexing

From: Edric M Ellis

Date: 27 Oct, 2010 15:25:01

Message: 4 of 13

"Jason Park" <jason.park@buseco.monash.edu.au> writes:

> Recalling from the previous post: ERR(i,j) = FCN(...,R(i)) - data(i,j), 1) I was
> wondering how those PARFOR loops work once my "error" function (or an objective
> function, to be more general) is submitted to an optimization solver (i.e.,
> lsqnonlin, fmincon, etc.). Does each worker calculates its share and communicate
> that to the main optimization solver? I thought the parameters that the solver
> is going to solve for in the end are supposed to be jointly determined, but as
> far as I understand, what this PCT is distributing "mutually independent" jobs
> to each worker, which doesn't seem to serve the function I need for my problem
> (please correct me if I'm wrong) I was worrying if the PCT could fully utilize
> the multiple cores on my processor this way. 2) What if I got another PARFOR
> loop in the function FCN above? I remember that in the link you referred me to
> it kind of says itt can call a function that contains another PARFOR loop, but
> I'm wondering if there is any other caution to take on this issue.

Hm, I'm a bit confused about where you're going with this - do you want
to use the built-in parallelism (via PCT/MATLABPOOL) from the
Optimization Toolbox? If so, you should start here:

http://www.mathworks.com/help/toolbox/optim/ug/briutqn.html

Basically, in that case - you don't need to add any parallelism to your
stuff at all, the parallelism is all applied by the solver.

Or, are you trying to write an objective function to use with a solver?
If so, you need to encapsulate your parallelism in your objective
function so that the solver can simply invoke it without needing to know
that you've used PARFOR behind the scenes.

You should not do both (i.e. use the parallel options in Optimization
Toolbox AND parallelise your code with PARFOR), as this will not be
beneficial.
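As an illustration of the second approach (the function name, model, and data layout below are placeholders, not from the thread), the parallelism can live entirely inside the objective function, so the solver calls it like any ordinary serial function:

```matlab
function ret = myObjective(p, data)
% Sketch: PARFOR encapsulated inside the objective function.
% The solver never needs to know parallelism is used behind the scenes.
N = size(data, 1);
err = zeros(N, 1);
parfor i = 1:N
    err(i) = p(1) * i + p(2) - data(i, 1);  % stand-in model residual
end
ret = err;
end
```

With this pattern you would call, say, lsqnonlin(@(p) myObjective(p, data), p0) while leaving the solver's 'UseParallel' option at its default.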

> I'm just new to the PCT and it seems like I need heaps of work on this
> to be able to use it to its full extent.

We hope not too much! Let me know (off-group if you'd prefer) any
suggestions for how we might make life easier for you.


Cheers,

Edric.

Subject: parfor loops and indexing

From: Jason Park

Date: 28 Oct, 2010 12:32:06

Message: 5 of 13

I'll try not to confuse things too much, but please bear with me if this sounds somewhat confusing because, as mentioned earlier, I'm a newbie.

OK. What I'd like to do is NOT to run a number of optimization functions simultaneously (at least for the time being). I'm using a single LSQNONLIN call to find the optimal parameters that minimize the sum of squared errors, which is why I built the function ERR to begin with. ERR calculates the difference between my model and the data, and is then submitted to LSQNONLIN for evaluation. As a reminder, ERR looks like this (without using PARFOR):

function ret = ERR_FUN(input)
for i = 1:N
  R = data(i,m); % this variable indexes the m-th column in the data set
                 % (the data set is loaded using the LOAD function)
  T = data(i,n); % this variable indexes the n-th column in the data set
  for j = 1:M
    ERR(i,j) = FUN(input(1),input(2),input(3),R,T) - data(i,j);
  end
end
ERR_reshaped = reshape(ERR,1,N*M);
ret = ERR_reshaped';

where the function FUN also contains a FOR loop:

function ret = FUN(A,B,C,R,T)
t = T:-.25:0;
h = [0 t(end:-1:1)];
K = []; L = [];
S = size(h);
for i = 1:S(2)
    K(i) = some_other_function1(A,B,C,R,h(i));
    L(i) = some_other_function2(A,B,C,R,h(i));
end
ret = K./L;

So far, I've tried:
1) changing the FOR loop in the function FUN to PARFOR
2) setting the parallel computing option in LSQNONLIN to 'UseParallel' = 'Always'
3) starting the solver with the line: matlabpool open local 4, and closing with the line: matlabpool close when it finishes its job.

What I'm trying to do is:
1) changing the FOR loops in ERR_FUN to PARFOR loops (but I don't know whether it's possible, how to do it if so, or whether it would reduce the run time even if I coded it in)

My questions are:
1) Is changing the FOR loops to PARFOR loops possible (using the variable slicing technique or something like that)?
2) From your comments, it sounds like I shouldn't set the option 'UseParallel' = 'Always' and use PARFOR loops in my functions at the same time. Is that what you mean? What do you mean by "parallelism"? Is it what's explained at this link (which follows from the one you referred me to)?: http://www.mathworks.com/help/toolbox/optim/ug/briutqn-1.html
Basically, this is the part I don't fully get (sorry again; I've only recently encountered the PCT, and it seems my university recently purchased a license for it):
> Basically, in that case - you don't need to add any parallelism to your
> stuff at all, the parallelism is all applied by the solver.
>
> Or, are you trying to write an objective function to use with a solver?
> If so, you need to encapsulate your parallelism in your objective
> function so that the solver can simply invoke it without needing to know
> that you've used PARFOR behind the scenes.
>
> You should not do both (i.e. use the parallel options in Optimization
> Toolbox AND parallelise your code with PARFOR), as this will not be
> beneficial.

I hope this clarifies what I was trying to say.
Thank you so much, Edric.

Cheers,
Jason

Subject: parfor loops and indexing

From: Edric M Ellis

Date: 1 Nov, 2010 08:39:14

Message: 6 of 13

"Jason Park" <jason.park@buseco.monash.edu.au> writes:

> Ok. What I'd like to do is NOT running a number of optimization
> functions simultaneously (at least for the time being). I'm using a
> single LSQNONLIN to find the optimal parameters that minimizes the sum
> of squared errors, and that's why I've built the function ERR to begin
> with. ERR calculates the difference between my model and the data,
> which then is submitted to LSQNONLIN to evaluate it. As a reminder,
> ERR looks like the below (without using PARFOR):
>
> function ret = ERR_FUN(input)
> for i = 1:N
> R = data(i,m) % this variable indexes the m-th column in the data set
> % and the data set is loaded using LOAD function
> T = data(i,n) % this variable indexes the n-th column in the data set
> for j=1:M
> ERR(i,j) = FUN(input(1),input(2),input(3),R,T) - data(i,j)
> end
> end
> ERR_reshaped = reshape(ERR,1,N*M);
> ret = [ERR_reshaped]';
>
> where the function FUN also contains a FOR loop:
>
> function ret = FUN(A,B,C,R,T)
> t = T:-.25:0;
> h = [0 t(end:-1:1)];
> K=[]; L=[];
> S=size(h);
> for i=1:S(2) K(i) = some_other_function1(A,B,C,R,h(i));
> L(i) = some_other_function2(A,B,C,R,,h(i));
> end
> ret = K./L
>
> So far, I've tried:
> 1) changing the FOR loop in the function FUN to PARFOR
> 2) setting the parallel computing option in LSQNONLIN to 'UseParallel' = 'Always'
> 3) starting the solver with the line: matlabpool open local 4, and closing with the line: matlabpool close when it finishes its job.
>
> What I'm trying to do are: 1) changing the FOR loops in ERR_FUN to PARFOR loops
> (but I don't know if it's possible, how if possible, and whether this would make
> any change in time reduction even if I coded them in)
>
> My questions are:
> 1) whether changing the FOR loops to PARFOR loops is possible (using
> variable slicing technique or something like that)

It's generally best to apply parallelism at the outermost layer,
provided there's enough parallelism there. By that I mean that there
need to be enough iterations of the loop to keep the workers busy. So,
in a case like this:

for i = 1:2
  for j = 1:10000
    ... do stuff ...
  end
end

applying a PARFOR at the outer layer would mean that at most 2 workers
could be employed - so in that case, it would be better to PARFOR the
inner loop.

In your case, I would first attempt to make the "for i = 1:N" loop in
ERR_FUN be a PARFOR loop. But that might need some slight trickiness to
deal with the way you're assigning into ERR. You might need to re-write
it like so:

parfor i = 1:N
  err_i = zeros( 1, M );
  for j = 1:M
    err_i(j) = FUN(...);
  end
  ERR(i,:) = err_i;
end

> 2) From your comments, it sounds like I shouldn't pass in the options setting as
> UseParallel' = 'Always' and use PARFOR loops in my functions at the same
> time. Is that what you mean?

Yep, that's right. When the optim stuff is set to UseParallel, then it
uses a PARFOR loop to evaluate multiple instances of your objective
function in parallel. This is usually the simplest way of getting stuff
running in parallel, and should give you speedup without you having to
modify your code - however, you may get better speedup by adding the
PARFOR inside your objective function as you're attempting, especially
in the case where there are few parameters to optimize. (I'm not all
that familiar with just how the optim stuff uses PARFOR, but I believe
the number of iterations in the PARFOR loops it uses is driven
basically by the number of parameters.)

Cheers,

Edric.

Subject: parfor loops and indexing

From: Paul Kerr-Delworth

Date: 3 Nov, 2010 10:49:04

Message: 7 of 13

Edric M Ellis <eellis@mathworks.com> wrote in message <ytw8w1digjx.fsf@uk-eellis-deb5-64.mathworks.co.uk>...
> [...full quote of message 6 snipped...]

Hi Jason,

Just to confirm what Edric has said regarding the use of PARFOR inside Optimization Toolbox: if you set 'UseParallel' to 'always' for LSQNONLIN, then the number of iterations in the PARFOR loops used by LSQNONLIN is equal to the number of parameters (variables) in your problem. Optimization Toolbox uses PARFOR to speed up the estimation of gradients (performed via finite differences) inside an optimization. When 'UseParallel' is set to 'always', the calculation of the gradient in each variable direction is performed in parallel.
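As a minimal sketch of enabling this (the objective myResiduals and the solver inputs x0, lb, ub are illustrative placeholders, and the pool syntax is the R2010-era form used in this thread):

```matlab
% Sketch: letting the solver parallelize its finite-difference gradients.
matlabpool open local 4                        % start 4 local workers
options = optimset('UseParallel', 'always');   % opt into parallel gradients
[x, resnorm] = lsqnonlin(@myResiduals, x0, lb, ub, options);
matlabpool close
```

Note that the speedup here scales with the number of parameters, since that is what bounds the iterations of the solver's internal PARFOR loop.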

If you'd like some more background on how the Optimization Toolbox solvers use parallel computing, see the following chapter in the Optimization Toolbox documentation

<<http://www.mathworks.com/help/toolbox/optim/ug/briutqn.html>>

Hope this helps.

Cheers,

Paul

Subject: parfor loops and indexing

From: Jason Park

Date: 5 Nov, 2010 21:21:04

Message: 8 of 13

A lot of thanks to you both, Edric and Paul!

So far, I've read quite a few documents about the PCT, including the link I was referred to, and I'm about to have a go at the variable slicing technique. I hope it works after all.

If I'm getting this right, as Paul mentioned, 'UseParallel'='Always' limits the maximum number of PARFOR iterations to the number of parameters in my model, even though it evaluates the gradients in parallel, which might speed things up. So what you'd suggest is to change the outermost FOR loop to a PARFOR loop, slice up the double-indexed variable, and turn off the 'UseParallel'='Always' option setting. Is that correct? Out of curiosity, what if I coded in the PARFOR part but left the option set to 'Always'? Would it mess things up, or at least slow down the processing?

Another quick question: how is multicore processing, which the PCT takes advantage of, different from multithreading in MATLAB? My laptop has an i7 Q 720 processor, which has four cores as far as I know, and I've assigned 4 workers to run in parallel, but it still seems that MATLAB is not using its full capacity - only around 50% (unlike my Core 2 Quad desktop at uni, whose CPU usage reaches nearly 100% with nothing else running in the background). So I was wondering whether this has anything to do with multithreading.

I'll keep you in the loop on this as I explore the new world of PCT :)

Thanks again and kind regards,

Jason

Subject: parfor loops and indexing

From: Mike Thomas

Date: 6 Nov, 2010 03:41:03

Message: 9 of 13

On a completely unrelated note, if Jason had been born 65 million years earlier, he might be named "Jurassic Park" instead.

:)

Mike Thomas
Liquid Nitrogen Overclocking
http://www.LiquidNitrogenOverclocking.com

Subject: parfor loops and indexing

From: Edric M Ellis

Date: 8 Nov, 2010 08:11:56

Message: 10 of 13

"Jason Park" <jason.park@buseco.monash.edu.au> writes:

> If I'm getting it right, as Paul mentioned, 'UseParallel'='Always'
> limits the maximum number of iterations to the number of parameters of
> my model despite the fact that it evaluates the gradients in parallel
> which might boost up the speed. So what you'd suggest is to change the
> outermost for loop to a PARFOR loop, slice up the double-indexed
> variable, and turn the option setting i.e., 'UseParallel'='Always'
> off. Is this correct? But out of curiosity, what if I coded in the
> PARFOR part but left the options setting to 'Always'? Would it mess up
> or at least slow down the processing?

I think that's what I was suggesting. But it's probably worth trying
things both ways around. If you have 'UseParallel' == 'Always' AND a
PARFOR loop in the cost function, then you might indeed find a slowdown
as we have to go through the PARFOR machinery even though no parallelism
is being applied.

> Another quick question is: how is multicore processing, which the PCT
> takes advantage of, is different from multithreading in MATLAB?

The stuff you get through PCT is all fully explicit parallelism using
multiple MATLAB processes (which could be located on physically separate
machines if you have MDCS licences).

MATLAB provides implicit parallelism for many operations using multiple
threads on one machine. For example, see here

<http://www.mathworks.com/support/solutions/en/data/1-4PG4AN/?solution=1-4PG4AN>

> Since my laptop has i7 processor Q 720 which has quad cores as far as
> I know, I've assigned to 4 workers to run in parallel but having done
> this, it still seems that MATLAB is not using up the full capacity -
> only around 50% (unlike my Core 2 Quad desktop at uni whose CPU usage
> reaches near 100% with nothing else running in background). So I was
> just wondering if this has got anything to do with multithreading.

PCT workers are set to use only a single computational thread, since the
intention is that you run one worker per core, and generally speaking
oversubscribing the cores doesn't get much improvement in performance -
and can sometimes lead to really poor performance. (If you ran 4 local
workers in the "default" threading mode on your machine, they'd each
have 4 computation threads - a total of 16 - and your OS may or may not
be able to schedule these efficiently).
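A quick way to check this yourself (a sketch; the exact output depends on your machine and pool configuration) is to query the thread count on each worker:

```matlab
% Sketch: inspect how many computational threads each pool worker uses.
matlabpool open local 4
spmd
    % Each worker reports its own lab index and thread count;
    % PCT workers default to a single computational thread.
    fprintf('worker %d: %d thread(s)\n', labindex, maxNumCompThreads);
end
matlabpool close
```

Comparing this with maxNumCompThreads on the client (typically one thread per core) makes the explicit-vs-implicit distinction visible.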

Cheers,

Edric.

Subject: parfor loops and indexing

From: Jason Park

Date: 9 Nov, 2010 12:51:03

Message: 11 of 13

Thanks Edric. You and the other experts who have commented throughout this discussion have helped me a lot; I appreciate it very much.

Now I kind of get what 'parallelism' is about (at least abstractly). Your suggestion to substitute PARFOR for the outermost FOR loop, with 'UseParallel'=='Never', is well understood. Although it wasn't the outermost loop, I had replaced one of the FOR loops with PARFOR before your advice and saw a significant speed improvement with 'UseParallel'=='Always'; after changing the setting as advised, it seems to slow down by comparison. What do you think is happening here?

Also, I've just run into a technical problem. I've been running MATLAB for several days in a row, and all of a sudden it shut down by itself as if nothing had ever happened! Is there any option setting that forces a shutdown after some period of time? If there is, can I turn it off to let MATLAB run beyond it? Are there any side effects of doing so?

Thanks again you all!

Jason

P.S. Mike. That Jurassic Park joke was a bit too old. One of my mates said it like more than a decade ago, and I didn't even laugh back then. But good try buddy! :)

Subject: parfor loops and indexing

From: Edric M Ellis

Date: 9 Nov, 2010 14:14:15

Message: 12 of 13

"Jason Park" <jason.park@buseco.monash.edu.au> writes:

> Now I kind of get what 'parallelism' is about (at least very
> abstractly). Your suggestion to substitute PARFOR in the place of the
> outermost FOR loop is well understood with setting
> 'UseParallel'=='Never'. Although it wasn't on the outermost loop, I've
> replaced one of the FOR loops with PARFOR and seen a significant
> improvement in speed with the setting 'UseParallel'=='Always' before
> your advice, and changing the setting as advised, it seems to slow
> down compared to otherwise. What do you think is happening in here?

It's hard to say exactly what's going on here. As I've mentioned, only
the first "PARFOR" the client hits will get parallelized; any inner ones
will run in serial mode on the workers. Generally it's best to make the
parallelism happen at the outermost layer - but there are several
things that can confound that. Firstly, the outermost loop may have
insufficient parallelism - e.g.

parfor i = 1:2
  for j = 1:1e6
    doSomething();
  end
end

has a maximum possible speedup of 2x regardless of the size of the
MATLABPOOL.

The other consideration is data transfer. If you make a large array
outside a PARFOR loop, and use the whole value inside the loop, we have
to transmit all the contents to the workers - this can be slow. E.g.

z = rand( 1, 1e6 );
parfor i = 1:1e4
  result(i) = sum( z(i:end) );
end

in the above (silly) example, the whole value of "z" must be transmitted
to each worker.

> Also, I've just experienced some technical problem. I've been running
> MATLAB for several days in a row, and all of a sudden it shuts down by
> itself as if nothing ever happened! Is there any option setting that
> commands a close-down as it runs beyond some point of time? If there
> is, can I turn it off to allow MATLAB to run beyond whatever it is?
> Any side effect by doing this though?

That definitely shouldn't happen - if you do figure out what might be
causing it, either post here or contact tech support.

Cheers,

Edric.

Subject: parfor loops and indexing

From: Ernst Kloppenburg

Date: 20 Jun, 2011 08:40:04

Message: 13 of 13

"Jason Park" wrote in message <iabqg6$dk4$1@fred.mathworks.com>...
> 2) setting the parallel computing option in LSQNONLIN to 'UseParallel' = 'Always'

With the option UseParallel='always', the Optimization Toolbox does parallelize gradient computations for some optimization algorithms, but NOT for lsqnonlin. See the documentation.
