Location of MPI implementation


[primaryLib,extras] = mpiLibConf



primaryLib - MPI implementation library used by a communicating job.

extras - Cell array of other required library names.


[primaryLib,extras] = mpiLibConf returns the MPI implementation library to be used by a communicating job. primaryLib is the name of the shared library file containing the MPI entry points. extras is a cell array of other library names required by the MPI library.

To supply an alternative MPI implementation, create a file named mpiLibConf.m, and place it on the MATLAB® path. The recommended location is matlabroot/toolbox/distcomp/user. Your mpiLibConf.m file must be higher on the cluster workers' path than matlabroot/toolbox/distcomp/mpi. (Sending mpiLibConf.m as a file dependency for this purpose does not work.) After your mpiLibConf.m file is in place, update the toolbox path caching with the following command in MATLAB:

rehash toolboxcache
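A minimal mpiLibConf.m might look like the following sketch. The library file names below are hypothetical placeholders; substitute the shared library shipped with your MPI implementation.

```matlab
function [primaryLib, extras] = mpiLibConf
% mpiLibConf - Select an alternative MPI implementation library.
% The file names here are illustrative only; replace them with the
% libraries from your MPI installation.
primaryLib = 'libmympi.so';  % shared library containing the MPI entry points
extras     = {};             % no additional libraries required in this sketch
```

Place this file on the workers' path as described above, ahead of matlabroot/toolbox/distcomp/mpi.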


Use the mpiLibConf function to view the current MPI implementation library.
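For example, calling the function with both output arguments shows which library is in effect; the names returned depend on your platform and configuration, so no output is reproduced here:

```matlab
% Query the MPI library currently used by communicating jobs
[primaryLib, extras] = mpiLibConf
```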



Under all circumstances, the MPI library must support all MPI-1 functions. Additionally, the MPI library must support null arguments to MPI_Init as defined in section 4.2 of the MPI-2 standard. The library must also use an mpi.h header file that is fully compatible with MPICH2.

When used with the MATLAB job scheduler or the local cluster, the MPI library must support the following additional MPI-2 functions:

  • MPI_Open_port

  • MPI_Comm_accept

  • MPI_Comm_connect

When used with any third-party scheduler, launch the workers with the version of mpiexec that corresponds to the MPI library in use. You might also need to start the corresponding process management daemons on the cluster before invoking mpiexec.
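As a sketch, for an MPICH2-based library managed by the mpd process manager (the host file name, process counts, and worker launch command are placeholders, not part of this product's documented interface):

```shell
# Start the MPICH2 process management daemons on the cluster nodes
# before launching any workers (mpd.hosts is an illustrative file name).
mpdboot -n 4 -f mpd.hosts

# Confirm that the mpiexec on the path matches the MPI library in use,
# then use it to launch the workers.
which mpiexec
mpiexec -n 4 <worker-launch-command>
```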



Introduced before R2006a
