Parallel Computing with MATLAB

Prerequisites

MATLAB® Fundamentals, or equivalent experience using MATLAB


Day 1 of 2
Improving Performance

Objective: This section introduces a parallel approach to running MATLAB code through the use of multiple MATLAB sessions. Interactive techniques for prototyping in a parallel environment are highlighted, and several ideas explored throughout the course are introduced.

  • Evaluating performance
  • Distributing code
  • Additional MATLAB sessions
  • Parallel for-loops
  • Measuring speedup
  • Hardware utilization
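
The topics above can be previewed with a minimal sketch: time a loop serially, then with a parfor-loop, and compare. The `sum(svd(rand(200)))` call is just a stand-in for any costly, independent computation.

```matlab
% Open a pool of workers (additional MATLAB sessions); the pool size
% defaults to the number of physical cores on most machines.
pool = parpool;

N = 200;
results = zeros(1, N);

% Time the serial version.
tic
for k = 1:N
    results(k) = sum(svd(rand(200)));   % stand-in for a costly computation
end
tSerial = toc;

% Time the parallel version: parfor distributes iterations to the workers.
tic
parfor k = 1:N
    results(k) = sum(svd(rand(200)));
end
tParallel = toc;

fprintf('Speedup: %.2fx on %d workers\n', tSerial/tParallel, pool.NumWorkers)

delete(pool)   % release the workers
```

Measured speedup rarely equals the worker count; overhead and hardware utilization are examined in this section.
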
Speeding up Computations

Objective: This section outlines the key steps for running parallel computations in a batch environment. The emphasis is on interacting with Parallel Computing Toolbox objects to create, submit, and manage batch jobs.

  • Cluster profiles
  • Job creation
  • Task creation
  • Job submission
  • Retrieving results
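
A minimal sketch of the batch workflow above, assuming the default local cluster profile (named 'Processes' in recent releases, 'local' in older ones):

```matlab
% Select a cluster via a profile.
c = parcluster('Processes');

% Create an independent job and add tasks to it:
% each task names a function, its number of outputs, and its inputs.
job = createJob(c);
createTask(job, @rand, 1, {3, 3});
createTask(job, @rand, 1, {2, 2});

% Submit the job, wait for it to finish, then retrieve the results.
submit(job);
wait(job);
out = fetchOutputs(job);   % cell array with one row per task

delete(job)   % clean up job data on the cluster
```
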
Task-Parallel Programming

Objective: This section identifies important considerations for programming task-parallel jobs, including decomposing a problem and partitioning input. Through a hands-on example, it also explores techniques typically employed to achieve speedup.

  • Decomposing parallel problems
  • Input partitioning
  • Function call aggregation
  • Load balancing
  • Considerations for parfor-loops
  • Variable classification in parfor-loops
  • Task-parallel and data-parallel applications
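
Variable classification in parfor-loops, listed above, can be illustrated with a short sketch; MATLAB classifies each variable automatically, as noted in the comments.

```matlab
data = rand(1, 1e6);
n = numel(data);
total = 0;

parfor k = 1:n
    % k     - the loop variable
    % data  - sliced: only data(k) is sent to the worker running iteration k
    % total - reduction: partial sums are accumulated across workers
    total = total + data(k)^2;
end
```

Because `total` is a reduction variable, the loop remains valid even though every iteration updates it; variables that cannot be classified this way cause a parfor error at analysis time.
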
Day 2 of 2
Working with Large Data Sets

Objective: This section explores working with arrays in a parallel environment, with an emphasis on parallel algorithms. Key themes are splitting large data sets across multiple MATLAB sessions and simultaneously performing the same operation on each portion. The section concludes by running prototyped code in a batch parallel job.

  • Single Program, Multiple Data (SPMD) model
  • Replicated, variant, and private variable classifications
  • Distributed arrays
  • Composite arrays
  • Data-parallel functions
  • Codistributed array creation
  • Codistributed array indexing
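
A brief sketch of the SPMD model and distributed arrays listed above (using `labindex`/`numlabs`; newer releases also provide `spmdIndex`/`spmdSize`):

```matlab
spmd
    % Every worker runs this block; labindex identifies the worker.
    part = labindex * ones(1, 3);   % variant variable: differs per worker
end
% On the client, 'part' is a Composite; index it to reach each worker's value.
part{1}

% A distributed array is partitioned across the workers' memory,
% but many functions operate on it with ordinary MATLAB syntax.
D = distributed.rand(4000);
s = sum(D(:));    % computed in parallel across the partitions
x = gather(s);    % bring the (scalar) result back to the client
```
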
Data-Parallel Programming

Objective: This section explores important programming considerations for communicating parallel jobs. It also introduces the communication features of parallel jobs, which can be used to build special architectures for solving specific types of parallel problems.

  • Partitioning data
  • Parallel topology
  • Sending and receiving data
  • Submitting communicating jobs
  • Collective communication
  • Synchronization
  • Deadlocks
  • Global operations
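
The communication topics above can be sketched in a few lines: point-to-point exchange around a ring of workers, a collective global sum, and an explicit synchronization point. `labSendReceive` pairs the send and receive in one call, which sidesteps the deadlock risk discussed in this section.

```matlab
spmd
    % Neighbors in a ring topology.
    next = mod(labindex, numlabs) + 1;
    prev = mod(labindex - 2, numlabs) + 1;

    % Point-to-point: send my index to 'next', receive from 'prev'.
    token = labSendReceive(next, prev, labindex);

    % Collective: global sum across all workers; every worker gets the result.
    total = gplus(labindex);

    labBarrier   % synchronize all workers before continuing
end
```
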
Increasing Scale with Multiple Systems

Objective: This section demonstrates tools for harnessing the power of multiple systems on a network for running code. It highlights techniques for working with a heterogeneous mix of systems and special features available to a cluster of computer systems.

  • System components
  • Connecting to remote clusters
  • Dynamic licensing
  • Attaching files
  • Specifying paths
  • Development and debug workflow
  • Application scalability
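
Attaching files and specifying paths, listed above, typically appear as options when submitting to a remote cluster. A sketch, in which the profile name 'MyHPCCluster' and the function and file names are hypothetical placeholders:

```matlab
% Connect to a remote cluster through a saved profile
% (created beforehand in the Cluster Profile Manager).
c = parcluster('MyHPCCluster');

% Submit a batch job, shipping dependent files to the workers and
% pointing them at code already visible on a shared filesystem.
j = batch(c, @myAnalysis, 1, {inputData}, ...
    'AttachedFiles', {'helperFunction.m'}, ...   % copied to each worker
    'AdditionalPaths', {'/shared/toolbox'});     % added to the workers' path

wait(j);
out = fetchOutputs(j);
```
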