mapreduce on Spark® and Hadoop® Clusters
You can use Parallel Computing Toolbox™ to distribute large arrays in parallel across multiple MATLAB® workers, so that you can run big-data applications that use the combined memory of your cluster. Parallel Computing Toolbox also enables you to execute tall array and datastore calculations in parallel, so that you can analyze big data sets that do not fit in the memory of a single machine. You can use MATLAB Parallel Server™ to run tall array and datastore calculations in parallel on Spark enabled Hadoop clusters, which significantly reduces the execution time of very large data calculations.
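As a minimal sketch of the tall array workflow described above, the following uses the `airlinesmall.csv` sample file that ships with MATLAB; the cluster setup line is an assumption and depends on your configured cluster profile:

```matlab
% Start a parallel pool so tall calculations run on the workers.
% (With MATLAB Parallel Server, you could instead create a pool on a
% Spark enabled Hadoop cluster via a configured cluster profile.)
parpool;

ds = datastore("airlinesmall.csv");          % datastore over the file(s)
ds.SelectedVariableNames = "ArrDelay";
ds.TreatAsMissing = "NA";

t = tall(ds);                                % tall array backed by the pool
meanDelay = mean(t.ArrDelay, "omitnan");     % deferred tall calculation
meanDelay = gather(meanDelay);               % gather triggers parallel evaluation
```

Tall array operations are deferred until `gather` is called, so MATLAB can optimize the number of passes through data that does not fit in memory.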
You can also run mapreduce calculations on Spark enabled Hadoop clusters and in parallel pools.
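The following sketch runs mapreduce in a parallel pool; the mapper and reducer names are illustrative, not part of any shipped API. On a Hadoop cluster you would instead pass a Hadoop cluster object to `mapreducer`:

```matlab
% Run mapreduce on the current parallel pool.
parpool;
mr = mapreducer(gcp);

ds = datastore("airlinesmall.csv", ...
    "SelectedVariableNames", "ArrDelay", "TreatAsMissing", "NA");

% Find the maximum arrival delay across the whole data set.
result = mapreduce(ds, @maxDelayMapper, @maxDelayReducer, mr);
readall(result)

function maxDelayMapper(data, ~, intermKVStore)
    % Emit the largest arrival delay seen in this block of data.
    add(intermKVStore, "MaxArrDelay", max(data.ArrDelay, [], "omitnan"));
end

function maxDelayReducer(~, intermValsIter, outKVStore)
    % Combine per-block maxima into a single overall maximum.
    m = -inf;
    while hasnext(intermValsIter)
        m = max(m, getnext(intermValsIter));
    end
    add(outKVStore, "MaxArrDelay", m);
end
```

Because the same mapper and reducer run unchanged whether the execution environment is a local session, a parallel pool, or a Hadoop cluster, you can develop on a small sample and scale up by changing only the `mapreducer` configuration.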