parpool

Create parallel pool on cluster




parpool enables the full functionality of the parallel language features (parfor and spmd) in MATLAB® by creating a special job on a pool of workers, and connecting the MATLAB client to the parallel pool.

parpool starts a pool using the default cluster profile, with the pool size specified by your parallel preferences and the default profile.


parpool(poolsize) overrides the number of workers specified in the preferences or profile, and starts a pool of exactly that number of workers, even if it has to wait for them to be available. Most clusters have a maximum number of workers they can start. If the profile specifies a MATLAB job scheduler (MJS) cluster, parpool reserves its workers from among those already running and available under that MJS. If the profile specifies a local or third-party scheduler, parpool instructs the scheduler to start the workers for the pool.


parpool(profilename) or parpool(profilename,poolsize) starts a worker pool using the cluster profile identified by profilename.


parpool(cluster) or parpool(cluster,poolsize) starts a worker pool on the cluster specified by the cluster object cluster.


parpool(___,Name,Value) applies the specified values for certain properties when starting the pool.


poolobj = parpool(___) returns a parallel.Pool object to the client workspace representing the pool on the cluster. You can use the pool object to programmatically delete the pool or to access its properties.


Examples

Create Pool from Default Profile

Start a parallel pool using the default profile to define the number of workers.
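A minimal sketch of this call, assuming a default cluster profile is configured:

```matlab
% Start a pool using the default cluster profile;
% the pool size comes from your parallel preferences.
parpool
```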


Create Pool from Specified Profile

Start a parallel pool of 16 workers using a profile called myProf.
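Assuming a profile named myProf exists on your system, the call is:

```matlab
% Start a 16-worker pool from the profile 'myProf'
parpool('myProf', 16)
```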


Create Pool from Local Profile

Start a parallel pool of 2 workers using the local profile.
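A sketch using the built-in local profile:

```matlab
% Start a 2-worker pool on the local machine
parpool('local', 2)
```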


Create Pool on Specified Cluster

Create an object representing the cluster identified by the default profile, and use that cluster object to start a parallel pool. The pool size is determined by the default profile.

c = parcluster
parpool(c)

Create Pool and Attach Files

Start a parallel pool with the default profile, and pass two code files to the workers.
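A sketch of this call (the file names are illustrative):

```matlab
% Start a pool with the default profile and attach two code files
parpool('AttachedFiles', {'myFun1.m', 'myFun2.m'})
```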


Return Pool Object and Delete Pool

Create a parallel pool with the default profile, and later delete the pool.

poolobj = parpool;

delete(poolobj)


Determine Size of Current Pool

Find the number of workers in the current parallel pool.

poolobj = gcp('nocreate'); % If no pool, do not create new one.
if isempty(poolobj)
    poolsize = 0;
else
    poolsize = poolobj.NumWorkers
end

Input Arguments


poolsize — Size of parallel pool
set in parallel preferences or parallel profile (default)

Size of the parallel pool, specified as a numeric value.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
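For example, a sketch requesting a specific pool size:

```matlab
% Request exactly 4 workers, overriding preferences and profile
parpool(4)
```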

profilename — Profile that defines cluster and properties
string

Profile that defines cluster and properties, specified as a string.


Data Types: char

cluster — Cluster to start pool on
cluster object

Cluster to start pool on, specified as a cluster object.

Example: c = parcluster();

Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: 'AttachedFiles',{'myFun.m'}

'AttachedFiles' — Files to attach to pool
string or cell array of strings

Files to attach to pool, specified as a string or cell array of strings.

With this argument pair, parpool starts a parallel pool and passes the identified files to the workers in the pool. The files specified here are appended to the AttachedFiles property specified in the applicable parallel profile to form the complete list of attached files. The 'AttachedFiles' property name is case sensitive, and must appear as shown.

Example: {'myFun.m','myFun2.m'}

Data Types: char | cell

'SpmdEnabled' — Indication if pool is enabled to support SPMD
true (default) | false

Indication if pool is enabled to support SPMD, specified as a logical. You can disable support only on a local or MJS cluster. Because parfor iterations do not involve interworker communication, disabling SPMD support this way allows the parallel pool to keep evaluating a parfor-loop even if one or more workers aborts during loop execution.
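For example, a sketch disabling SPMD support (possible only on a local or MJS cluster):

```matlab
% Start a pool that keeps evaluating parfor even if a worker aborts
parpool('SpmdEnabled', false)
```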

Data Types: logical

Output Arguments


poolobj — Access to parallel pool from client
parallel.Pool object

Access to parallel pool from client, returned as a parallel.Pool object.

More About



  • The pool status indicator in the lower-left corner of the desktop shows the client session connection to the pool and the pool status. Click the icon for a menu of supported pool actions.

    The indicator icon differs depending on whether a pool is running or not.

  • If you set your parallel preferences to automatically create a parallel pool when necessary, you do not need to explicitly call the parpool command. You might explicitly create a pool to control when you incur the overhead time of setting it up, so the pool is ready for subsequent parallel language constructs.

  • delete(poolobj) shuts down the parallel pool. Without a parallel pool, spmd and parfor run as a single thread in the client, unless your parallel preferences are set to automatically start a parallel pool for them.

  • When you use the MATLAB editor to update files on the client that are attached to a parallel pool, those updates automatically propagate to the workers in the pool. (This automatic updating does not apply to Simulink® model files. To propagate updated model files to the workers, use the updateAttachedFiles function.)

  • When connected to a parallel pool, the following commands entered in the client Command Window also execute on all the workers in the pool: cd, addpath, and rmpath.

    This behavior allows you to set the working folder and the command search path on all the workers, so that subsequent parfor-loops execute in the proper context.

    If any of these commands does not work on the client, it is not executed on the workers either. For example, if addpath specifies a folder that the client cannot access, the addpath command is not executed on the workers. However, if the working directory or path can be set on the client, but cannot be set as specified on any of the workers, you do not get an error message returned to the client Command Window.

    This slight difference in behavior might be an issue in a mixed-platform environment where the client is not the same platform as the workers, where folders local to or mapped from the client are not available in the same way to the workers, or where folders are in a nonshared file system. For example, if you have a MATLAB client running on a Microsoft® Windows® operating system while the MATLAB workers are all running on Linux® operating systems, the same argument to addpath cannot work on both. In this situation, you can use the function pctRunOnAll to assure that a command runs on all the workers.
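For instance, a sketch using pctRunOnAll (the folder name is illustrative):

```matlab
% Run the same addpath command on the client and on every worker
pctRunOnAll addpath /shared/projectCode
```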

    Another difference between client and workers is that any addpath arguments that are part of the matlabroot folder are not set on the workers. The assumption is that the MATLAB install base is already included in the workers' paths. The rules for addpath regarding workers in the pool are:

    • Subfolders of the matlabroot folder are not sent to the workers.

    • Any folders that appear before the first occurrence of a matlabroot folder are added to the top of the path on the workers.

    • Any folders that appear after the first occurrence of a matlabroot folder are added after the matlabroot group of folders on the workers' paths.

    For example, suppose that matlabroot on the client is C:\Applications\matlab\. With an open parallel pool, execute the following to set the path on the client and all workers:
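The code for this example is not shown in the source; a representative call, in which the P* folder names are illustrative and T3, T4, and T6 are the matlabroot subfolders discussed here, might be:

```matlab
% Mix of ordinary folders (P*) and matlabroot subfolders (T3, T4, T6)
addpath('P1', ...
        'P2', ...
        'C:\Applications\matlab\T3', ...
        'C:\Applications\matlab\T4', ...
        'P5', ...
        'C:\Applications\matlab\T6', ...
        'P7', ...
        'P8');
```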


    Because T3, T4, and T6 are subfolders of matlabroot, they are not set on the workers' paths. So on the workers, the pertinent part of the path resulting from this command is:

    <worker original matlabroot folders...>