File Exchange


Worker Object Wrapper

version (3.39 KB) by MathWorks Parallel Computing Toolbox Team
Simplifies managing resources such as large data within PARFOR loops and SPMD blocks


Updated 31 Oct 2016


Editor's Note: This file was selected as MATLAB Central Pick of the Week

Note that since MATLAB release R2015b, parallel.pool.Constant supersedes
the WorkerObjWrapper.
The WorkerObjWrapper is designed for situations where a piece of
data is needed multiple times inside the body of a PARFOR loop or
an SPMD block, and that data is expensive to create and does not
need to be re-created each time. Examples include database
connection handles, large arrays, and so on.
Consider a situation where each worker needs access to a large
but constant set of data. While this data set can be passed in to
the body of a PARFOR block, it does not persist there, and will
be transferred to each worker for each PARFOR block. For example:

largeData = generateLargeData( 5000 );
parfor ii = 1:20
    x(ii) = someFcn( largeData );
end
parfor ii = 1:20
    y(ii) = someFcn( largeData, x(ii) );
end

This could be simplified like so:

wrapper = WorkerObjWrapper( @generateLargeData, 5000 );
parfor ii = 1:20
    x(ii) = someFcn( wrapper.Value );
end
parfor ii = 1:20
    y(ii) = someFcn( wrapper.Value, x(ii) );
end

In that case, the function "generateLargeData" is evaluated only
once on each worker, and no large data is transferred from the
client to the workers. The large data is cleared from the workers
when the variable "wrapper" goes out of scope or is cleared on
the client.
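On R2015b or later, the same pattern can be written with parallel.pool.Constant, which the note above names as the successor to the WorkerObjWrapper. A minimal sketch, reusing the hypothetical generateLargeData and someFcn from the example above:

```matlab
% Passing a function handle means the large data is built once per
% worker; nothing large is transferred from the client.
c = parallel.pool.Constant( @() generateLargeData( 5000 ) );
parfor ii = 1:20
    x(ii) = someFcn( c.Value );
end
parfor ii = 1:20
    y(ii) = someFcn( c.Value, x(ii) );
end
```

As with the wrapper, the per-worker value is released when `c` is cleared on the client.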

Another example might be constructing a worker-specific
log-file. This can be achieved like so:

% build a function handle to open a numbered text file:
fcn = @() fopen( sprintf( 'worker_%d.txt', labindex ), 'wt' );

% opens the file handle on each worker, specifying that fclose
% will be used later to "clean up" the file handle created.
w = WorkerObjWrapper( fcn, {}, @fclose );

% Run a parfor loop, logging to disk which worker operated on which
% loop iterations
parfor ii=1:10
    fprintf( w.Value, '%d\n', ii );
end

clear w; % causes "fclose(w.Value)" to be invoked on the workers
type worker_1.txt % see which iterates worker 1 got

Cite As

MathWorks Parallel Computing Toolbox Team (2021). Worker Object Wrapper, MATLAB Central File Exchange.

Comments and Ratings (46)

Adam Peng

I tried to call two Java objects with parfor, wrapping the objects as shown below, but I got an error: "No appropriate method, property, or field 'value' for class 'WorkerObjWrapper'." Could you please kindly help me fix it? Thank you so much!

dpath = 'thorup.jar';

aspl = zeros(num_v, num_v); %all shortest path length
np = de.unikiel.npr.thorup.util.ThorupTest.maincode;
wrapperObj_np = WorkerObjWrapper(np);

thorup = np.construct(competition_graph_edges.',num_v);
wrapperObj_thorup = WorkerObjWrapper(thorup);
parfor i=1:num_v
    % calculate shortest path length from origin i-1 to all other nodes
    np_main = wrapperObj_np.value;
    % import thorup algorithm
    %thorup = np_main.construct(competition_graph_edges.',num_v);
    aspl(i,:) = np_main.calculate(i-1,wrapperObj_thorup,num_v);
end

Srinidhi Ganeshan


Edric Ellis

Hi Xiaolei - yes, either a cell array or struct would work fine for this.
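A hedged sketch of the struct approach suggested here; the arrays A and B and the function useArrays are placeholders, not part of the original posts:

```matlab
% Group several differently-sized arrays into one struct, then wrap
% the struct so each worker builds/receives it only once.
s.A = rand(1000, 1);
s.B = rand(500, 20);
w = WorkerObjWrapper(s);
parfor ii = 1:10
    data = w.Value;   % one struct per worker
    out(ii) = useArrays(data.A, data.B, ii);
end
```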

Xiaolei TU

Hi guys, thanks for sharing this awesome idea. What should I do if I want to wrap a few arrays of different sizes and dimensions from my workspace? Should I first group them into a cell array or struct?

xu liang

Pengfei Zhao



Edric Ellis

Hi Maximilian - that's expected, each worker gets a full copy of the value. The wrapper just helps you build the value on the workers, and avoid building it or copying it across multiple times.

Maximilian Hoffmann

If I use the wrapper, and run the following code:


the memory usage after running the second command is still approximately numCPU times higher

Edric Ellis

Hi Stefano,

Hm, I'm not sure where the problem lies then. If you have code that reproduces the problem, you could post that over on MATLAB Answers...

Cheers, Edric.

Stefano C

Thank you very much Edric,

I don't think that is my case, since I open the parallel pool just once on the client checking whether it is already running:

% start parallel pool
p = gcp('nocreate'); % if no pool, do not create new one.
if isempty(p)
    p = parpool('local', 1);
end
poolsize = p.NumWorkers; % get num workers

Do you have other suggestions?

Edric Ellis

Hi Stefano,

You can hit that error if you close and re-open the parallel pool, for example:

pool = parpool('local', 1);
w = WorkerObjWrapper(7);
pool = parpool('local', 1);
spmd, disp(w.Value), end

Is it possible that's the cause for you?

Stefano C

I'm trying to use the WorkerObjWrapper with parfeval but, despite having (I think) correctly created the WorkerObjWrapper,

[...] extract from main.m
% prepare Java object wrapper
wrapArgs = {wpath, fName, numFolds};
wrapperObj = WorkerObjWrapper( @generateObj, wrapArgs);

sent the whole object to the worker,

[...] extract from main.m
% run parallel elaboration
for f = 1 : N
    % call Parallel Function Evaluation
    results(f) = parfeval(p, @parFunc, 2, wpath, wrapperObj, N, f);
end
% collect the results
for f = 1 : N
    [completedTask,value] = fetchNext(results);
    fprintf('Got result #%d\n', completedTask);
end

picked out the .Value field (i.e. the output of the wrapped function, in my case a struct) on the worker and called parfeval on the client,

[...] extract from parFunc.m
wrapObj = wrapperObj.Value;
data =;
rand = wrapObj.rand;
relation = wrapObj.relation;

I get the error:

Error in main (line 90)
[completedTask,value] = fetchNext(results);
Caused by:
Error using WorkerObjWrapper/get.Value (line 109)
Assertion failed.

May I kindly ask if anyone has in mind a possible reason for this error?

Many thanks in advance for this space, any help and your time

Edric Ellis

I don't see the problem when using R2015a. What OS are you using, and what cluster type? Do you always see this, or only sometimes?

Gianni Schena

I used the Wrapper extensively with old MATLAB versions without any problem.
With R2015a I get an issue that requires starting a new MATLAB session:

Error using WorkerObjWrapper>(spmd) (line 155)
An error occurred during setup for SPMD execution. If this problem
persists, you may need to delete and start the parallel pool again
using parpool.

Error in WorkerObjWrapper/workerInit (line 155)
spmd, ?WorkerObjWrapper; end % workaround first-time access problems

Error in WorkerObjWrapper (line 97)
WorkerObjWrapper.workerInit( tmpId, ctor, args, dtor );

Error in parallel_due_mat_script_with_restart_1 (line 130)
wrapper=WorkerObjWrapper(Sys),% share Sys to workers

Guanfeng Gao

I have a question. I want to use a cell variable in a parfor loop and call C code through a MEX file, but it shows an error. I know that the Worker Object Wrapper output is already a cell, so the MEX file cannot work. Is there a simple way to solve that problem?

Edric Ellis

@Hugo that error means that your parallel pool basically crashed. What happens if you try simply



Hugo Lafaye

Hi, I have the following error running this code with MATLAB R2014a:

>> w=WorkerObjWrapper(trackMat);
Error using WorkerObjWrapper>(spmd) (line 156)
The parallel pool that SPMD was using has been shut down.

Error in WorkerObjWrapper/workerInit (line 156)

Error in WorkerObjWrapper (line 97)
WorkerObjWrapper.workerInit( tmpId, ctor, args, dtor );

The client lost connection to worker 4. This might be due to network
problems, or the interactive communicating job might have errored.

trackMat being a large matrix of single and w=WorkerObjWrapper(magic(5)) is working well.

Any ideas?



Great file! This sped up my code a lot.

Edric Ellis

@Dan, try replacing line 136 with the following:

Map = containers.Map(uint32(0), [], 'UniformValues', false);

which is the R2009a syntax.


@Edric - Hi. Thank you for responding so quickly! I am running a fairly old version of MATLAB, MATLAB 7.8.0 (R2009a).

When I run:
>> containers.Map( 'KeyType', 'uint32', 'ValueType', 'any' )

I get the same error I got before:

??? No constructor 'containers.Map' with matching signature found.


Edric Ellis

@Dan - that error is definitely not expected. What version of MATLAB/PCT are you using? What happens if you try to execute

containers.Map('KeyType', 'uint32', 'ValueType', 'any');



I get the following error when I run the example code
w = WorkerObjWrapper( magic(5) );

??? No constructor 'containers.Map' with matching signature found.

How do I fix the error?



Joao Henriques

Works well, thanks!

It does consume a fair bit of memory though. Since "random access to constant data" seems to be the most common model for many data-intensive applications, it would be great if MATLAB somehow supported read-only shared memory across labs in the future.

Leah K

Worked perfectly for SQL JDBC connection. Thanks for your help Edric.

w = WorkerObjWrapper(@connJDBC, {'dbname','servername'}, @close);

function conn = connJDBC( databasename, servername )

s.DataReturnFormat = 'dataset';
s.NullNumberRead = 'NaN';
s.NullNumberWrite = 'NaN';
s.NullStringRead = 'null';
s.NullStringWrite = 'null';

conn = database(databasename,'','','',...
['jdbc:sqlserver://' servername ';database=' databasename ';integratedSecurity=true;']);

Edric Ellis

Hi Matt, FYI - I updated the version here to include that fix. Cheers, Edric.

Matt J

Hi Edric,

That seemed to work! Thanks.

Edric Ellis

Hi Matt, calling "clear" inside SPMD is not necessary - that's not the problem. Because of the way Composites work, when they go out of scope the workers don't find out immediately, only on the next SPMD block. If you add an SPMD block to your function, that's not sufficient since that happens before the Composites are created during the WorkerObjWrapper deletion.

Next suggestion: try adding the following line

val = []; dtor = []; valdtor = [];

as the last line inside the SPMD block in workerDelete.

Matt J

I meant to say, it is _peculiar_ that sending no command to the workers in an empty spmd block should have this effect.

Matt J

Hi Edric. An empty spmd block does clear them, but only from the command line. Putting it at the end of the function that created the wrapper object does no good. I also tried inserting an empty spmd block in the classdef where you recommended. That also didn't affect anything.

Is it desired behavior for an empty spmd block to clear the object? It is particular that sending no command to the workers forces a clear. Wouldn't it be better to enable

spmd, clear obj; end

Why wouldn't that spmd construct work?

Edric Ellis

Hi Matt, with your latest reproduction put inside a function, I see that the workers hold on to the memory after the function returns, but an empty "spmd, end" block is sufficient to clear them. Do you not see that behaviour?

Note that the Composite causing the memory to be retained is one created in WorkerObjWrapper, not your code.

Matt J

Hi Edric,

No, even doing

spmd, clear w, end

doesn't release memory. Also, the problem is there even in a modification of my test where SPMD is not used at all (i.e., no Composites)



w = WorkerObjWrapper( @(a,b)zeros(a,b), {N^2,R});

parfor i=1:60


Edric Ellis

Hi Matt,

I see the same thing. The problem is that the memory for a Composite value is not getting cleaned up. The next "spmd" block will fix things, or you could try adding a line saying "spmd, end" at line 120 just after the call to workerDelete.

Cheers, Edric.

Matt J

I don't know if this is a platform dependent thing, but in the following code, I am not seeing the memory allocated by w.Value clear from the workers when w goes out of scope.

function test



w = WorkerObjWrapper( @(a,b)zeros(a,b)+labindex, {N^2,R});


parfor i=1:60



After running the above, the Task Manager shows about 3GB more memory usage. This is on a

Dell Precision T7500
Intel Xeon X5680 3.33Ghz
dual hexacore

Edric Ellis

Hi Joe, There's a bug in WorkerObjWrapper that I'll submit a fix for - but in the meantime you can try running

spmd, ?WorkerObjWrapper; end

before doing anything else and that should fix things for you.


Great file! It's improved the speed of the parfor loop in a simulation I've written by 30%.

One issue: the first time I run this in my program, I get the following error:

Error using WorkerObjWrapper>(spmd body) (line 161)
You cannot get the 'Map' property of 'WorkerObjWrapper'.
You cannot get the 'Map' property of 'WorkerObjWrapper'.

This error occurs once for every worker I have, and aborts my script. When I then run my script again (no changes), WorkerObjWrapper works wonderfully and no errors occur. I'm calling it in this form (simplified):

tsteps = 1:10000;
xpoints = 1:100;
for jj = tsteps
% omitting stuff here that generates a large matrix of kk=6 different quantities ("bigMatrix(ii,kk)") that evolves in time and needs to be checked for each time step at each x point.

bigMatrixWrapped = WorkerObjWrapper(bigMatrix);

    parfor ii = xpoints
        bigMatrixW = bigMatrixWrapped.Value;
        results(ii,jj) = processResults(bigMatrixW(ii));
    end
end

The error occurs on the line "bigMatrixWrapped = WorkerObjWrapper(bigMatrix);" so I'm not even sure that the structure of my parfor loop matters, unless for some reason WorkerObjWrapper doesn't like being called repeatedly. But then running the whole script a second time works, with no "Map" error (I'm not even sure what that means) and the expected results collected in the "results(ii,jj)" matrix.

Any ideas?


Thanks for the nice file!

However, I have tried to use it in my code in the following way and it resulted in an increase of the simulation running time compared to the non-parallelized version of the code.

I am dealing with large arrays and data files and have to calculate some statistics for a very large number of different cases.

In the initial non-parallelized version of my code, I calculate the statistics for case 1 to case 10^7 inside a for loop.

Non-parallelized Code:

... "read many files"
... "generate large arrays"

for i = 1:10^7
    ... "Calculate statistics"
end

... "Write statistics in a text file"

In the parallelized version of the code, I use a PARFOR loop. However, I cannot have all the codes to calculate the statistics directly inside the PARFOR loop due to Matlab restrictions and errors. So, I had to create a new function called STAT and copy all the codes to calculate the statistics in this function.

Parallelized Code:

... "read many files"
... "generate large arrays"


parfor i = 1:10^7
    STAT(w1.Value, w2.Value, w3.Value, arg1, arg2, arg3, arg4);
    ..."slice STAT"
end


... "Write statistics in a text file"

The problem with my parallelized version of the code is that it takes much longer than the non-parallelized version. Inside the PARFOR loop, I have to call a function (STAT) and pass many large arrays (w1.Value) at each iteration.

Does anyone know what is the best way to optimize/parallelize this code?

Many thanks!

Matt J

That's a fair point Sebastian. My only thought is that, somehow, SPMD manages to make this kind of thing possible, via codistributed arrays, so I wonder why PARFOR cannot.


Maybe I'm wrong, but I'll try to explain better. Generally, in a parfor loop called several times I have two kinds of data: one that changes constantly, and one that is fixed over a parfor loop cycle.
For example:
a = rand(x2, 1);
b = rand(x2, 1);

for i1 = 1:x1
    parfor i2 = 1:x2
        a(i2) = fun1(a(i2), b(i2));
    end
    a = fun2(a, b);
end


It would be useful for the sliced variable "b" to persist on the workers, while the sliced variable "a" is driven automatically by the parfor.
Is it guaranteed that the portion of "b" stored on each worker corresponds to the part of "a" that is sent to that worker in each call to the parfor loop?
That is my question.

Sorry for my English.

Matt J

Hi Sebastian,

Not sure what you meant by "distributed the data in different way in each parfor call" or how it was relevant to my Comment. I was asking for a way to make the data slicing persist across parfor calls. This implies that the data would be distributed the same way in each parfor, not in different ways.


Hi Matt J.
Good idea, but I think it is difficult in the parfor loop if it distributes the data in a different way in each parfor call. As far as I know, in parfor it isn't defined what information each worker receives.

Matt J

Hi Edric,

Looks very useful, but your example showing how data can be made to persist across 2 subsequent parfor loops seems restricted to unsliced data. I was wondering if the class would allow sliced data to persist as well (without needing to be resliced).

Michael Völker

Christopher Kanan

MATLAB Release Compatibility
Created with R2013b
Compatible with any release
Platform Compatibility
Windows macOS Linux
