Create parallel pool on cluster
parpool
parpool(poolsize)
parpool(profilename)
parpool(cluster)
parpool(___,Name,Value)
poolobj = parpool(___)
parpool creates a parallel pool of workers on the default cluster, with its NumWorkers in the range [1, preferredNumWorkers] for running parallel language features. preferredNumWorkers is the value defined in your parallel preferences.
parpool enables the full functionality of the parallel language features in MATLAB® by creating a special job on a pool of workers, and by connecting the MATLAB client to the parallel pool. Parallel language features include parfor, spmd, and distributed. If possible, the working folder on the workers is set to match that of the MATLAB client session.
parpool(poolsize) creates and returns a pool with the specified number of workers. poolsize can be a positive integer or a range specified as a 2-element vector of integers. If poolsize is a range, the resulting pool has size as large as possible in the range requested. Specifying poolsize overrides the number of workers specified in the preferences or profile, and starts a pool of exactly that number of workers, even if it has to wait for them to be available. Most clusters have a maximum number of workers they can start.

If the profile specifies a MATLAB job scheduler (MJS) cluster, parpool reserves its workers from among those already running and available under that MJS. If the profile specifies a local or third-party scheduler, parpool instructs the scheduler to start the workers for the pool.

parpool(profilename) creates and returns a pool on the cluster defined by the specified profile.

parpool(cluster) creates and returns a pool on the specified cluster object.

parpool(___,Name,Value) applies the specified values for certain properties when starting the pool.

poolobj = parpool(___) returns an object to the client workspace representing the pool on the cluster.
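For instance, a minimal sketch of requesting a range rather than an exact size (the resulting size depends on how many workers your cluster can provide):

```matlab
% Request a pool of between 2 and 4 workers; parpool starts as many
% workers as possible within the requested range.
poolobj = parpool([2 4]);
disp(poolobj.NumWorkers)   % some value between 2 and 4
delete(poolobj)            % shut down the pool when finished
```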
Start a parallel pool using the default profile to define the number of workers.
parpool
Start a parallel pool of 16 workers using a nondefault cluster profile.
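A sketch of this call; 'myProfile' is a placeholder profile name, not one defined on this page:

```matlab
% 'myProfile' is a hypothetical cluster profile; substitute your own.
parpool('myProfile', 16)
```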
Start a parallel pool of 2 workers using the local profile.
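This call can be written as:

```matlab
parpool('local', 2)
```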
Create an object representing the cluster identified by the default profile, and use that cluster object to start a parallel pool. The pool size is determined by the default profile.
c = parcluster
parpool(c)
Start a parallel pool with the default profile, and pass two code files to the workers.
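A sketch of this call; the file names mod1.m and mod2.m are placeholders, not files named on this page:

```matlab
% Attach two hypothetical code files so they are available on every worker.
parpool('AttachedFiles', {'mod1.m', 'mod2.m'})
```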
Create a parallel pool with the default profile, and later delete the pool.
poolobj = parpool;
delete(poolobj)
Find the number of workers in the current parallel pool.
poolobj = gcp('nocreate'); % If no pool, do not create new one.
if isempty(poolobj)
    poolsize = 0;
else
    poolsize = poolobj.NumWorkers
end
poolsize — Size of parallel pool
Size of the parallel pool, specified as a positive integer or a range specified as a 2-element vector of integers. If poolsize is a range, the resulting pool has size as large as possible in the range requested. Set the default preferred number of workers in the parallel preferences or parallel profile.
profilename — Profile that defines cluster and properties
Profile that defines cluster and properties, specified as a character vector.
cluster — Cluster to start pool on
Cluster to start pool on, specified as a cluster object. Use parcluster to get a cluster object. For example:
c = parcluster;
parpool(c)
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
'AttachedFiles' — Files to attach to pool
Files to attach to pool, specified as a character vector, string or string array, or cell array of character vectors.
With this argument pair, parpool starts a parallel pool and passes the identified files to the workers in the pool. The files specified here are appended to the AttachedFiles property specified in the applicable parallel profile to form the complete list of attached files. The 'AttachedFiles' property name is case sensitive, and must appear as shown.
'AutoAddClientPath' — Specifies if client path is added to worker path
A logical value (true or false) that controls whether user-added entries on the client's path are added to each worker's path at startup. By default 'AutoAddClientPath' is set to true.
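For instance, a minimal sketch of turning this behavior off:

```matlab
% Start a pool without mirroring the client's user-added path entries.
parpool('AutoAddClientPath', false)
```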
'EnvironmentVariables' — Environment variables copied to workers
Names of environment variables to copy from the client session to the workers, specified as a character vector, string or string array, or cell array of character vectors. The names specified here are appended to the EnvironmentVariables property specified in the applicable parallel profile to form the complete list of environment variables. Any variables listed which are not set are not copied to the workers. These environment variables are set on the workers for the duration of the parallel pool.
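A sketch of this option; the variable name MY_DATA_DIR is a placeholder, not one named on this page:

```matlab
% Copy a hypothetical client environment variable to every worker.
parpool('EnvironmentVariables', {'MY_DATA_DIR'})
```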
'SpmdEnabled' — Indication if pool is enabled to support SPMD
Indication if pool is enabled to support SPMD, specified as a logical. You can disable support only on a local or MJS cluster. Because parfor iterations do not involve interworker communication, disabling SPMD support this way allows the parallel pool to keep evaluating a parfor-loop even if one or more workers aborts during loop execution.
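For instance, a minimal sketch of trading spmd support for parfor resilience:

```matlab
% With SPMD disabled, a parfor-loop can keep running even if a worker aborts.
parpool('SpmdEnabled', false)
```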
Remove any startup.m from your MATLAB path if you want to run any parallel code including parpool. If you have trouble starting the parallel pool, see this MATLAB Answers page.
The pool status indicator in the lower-left corner of the desktop shows the client session connection to the pool and the pool status. Click the icon for a menu of supported pool actions.
[Pool status indicator icons: one shown with a pool running, one with no pool running.]
If you set your parallel preferences to automatically create a parallel pool when necessary, you do not need to explicitly create a pool with the parpool command. You might explicitly create a pool to control when you incur the overhead time of setting it up, so the pool is ready for subsequent parallel language constructs.
delete(poolobj) shuts down the parallel pool. Without a parallel pool, spmd and parfor run as a single thread in the client, unless your parallel preferences are set to automatically start a parallel pool for them.
When you use the MATLAB editor to update files on the client that are attached to a parallel pool, those updates automatically propagate to the workers in the pool. (This automatic updating does not apply to Simulink® model files. To propagate updated model files to the workers, use the updateAttachedFiles function.)
If possible, the working folder on the workers is initially set to match that of the MATLAB client session. Subsequently, the following commands entered in the client Command Window also execute on all the workers in the pool: cd, addpath, and rmpath. This behavior allows you to set the working folder and the command search path on all the workers, so that subsequent pool activities such as parfor-loops execute in the proper context.
When you change folders or add a path with cd or addpath on clients with Windows® operating systems, the value sent to the workers is the UNC path for the folder if possible. For clients with Linux® operating systems, it is the absolute folder location.
If any of these commands does not work on the client, it is not executed on the workers either. For example, if addpath specifies a folder that the client cannot access, the addpath command is not executed on the workers. However, if the working folder can be set on the client, but cannot be set as specified on any of the workers, you do not get an error message returned to the client Command Window.
Be careful of this slight difference in behavior in a mixed-platform environment where the client is not the same platform as the workers, where folders local to or mapped from the client are not available in the same way to the workers, or where folders are in a nonshared file system. For example, if you have a MATLAB client running on a Microsoft® Windows operating system while the MATLAB workers are all running on Linux operating systems, the same argument to addpath cannot work on both. In this situation, you can use the function pctRunOnAll to assure that a command runs on all the workers.
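A minimal sketch of such a call; the folder path '/shared/code' is a placeholder for a location visible to all workers:

```matlab
% Run addpath on the client and on every worker in the pool.
% '/shared/code' is a hypothetical folder on a shared file system.
pctRunOnAll addpath('/shared/code')
```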
Another difference between client and workers is that any addpath arguments that are part of the matlabroot folder are not set on the workers. The assumption is that the MATLAB install base is already included in the workers' paths. The rules for addpath regarding workers in the pool are:

Subfolders of the matlabroot folder are not sent to the workers.

Any folders that appear before the first occurrence of a matlabroot folder are added to the top of the path on the workers.

Any folders that appear after the first occurrence of a matlabroot folder are added after the matlabroot set of folders on the workers' paths.
For example, suppose that matlabroot on the client is C:\Applications\matlab\. With an open parallel pool, execute the following to set the path on the client and all workers:
addpath('P1', 'P2', 'C:\Applications\matlab\T3', 'C:\Applications\matlab\T4', 'P5', 'C:\Applications\matlab\T6', 'P7', 'P8');
Because T3, T4, and T6 are subfolders of matlabroot, they are not set on the workers' paths. So on the workers, the pertinent part of the path resulting from this command is:

P1
P2
<worker original matlabroot folders...>
P5
P7
P8