Simulation Acceleration Using GPUs

Turbo, LDPC, Viterbi, Convolutional Coding

Communications Toolbox™ includes GPU-based System objects that execute their algorithms on a graphics processing unit (GPU) rather than on a CPU. By pairing these System objects with a supported GPU, you can accelerate your simulations.


comm.gpu.AWGNChannel — Add white Gaussian noise to input signal with GPU
comm.gpu.BlockDeinterleaver — Restore original ordering of block interleaved sequence with GPU
comm.gpu.BlockInterleaver — Create block interleaved sequence with GPU
comm.gpu.ConvolutionalDeinterleaver — Restore ordering of symbols using shift registers with GPU
comm.gpu.ConvolutionalEncoder — Convolutionally encode binary data with GPU
comm.gpu.ConvolutionalInterleaver — Permute input symbols using shift registers with GPU
comm.gpu.LDPCDecoder — Decode binary low-density parity-check data with GPU
comm.gpu.PSKDemodulator — Demodulate using M-ary PSK method with GPU
comm.gpu.PSKModulator — Modulate using M-ary PSK method with GPU
comm.gpu.TurboDecoder — Decode input signal using parallel concatenation decoding with GPU
comm.gpu.ViterbiDecoder — Decode convolutionally encoded data using Viterbi algorithm with GPU
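As a sketch of how these objects are used together, the snippet below builds a simple 8-PSK link over an AWGN channel with the GPU-based System objects listed above. The property names and object-call syntax follow the usual comm System object conventions; the specific Eb/No value and modulation order are illustrative choices, not values taken from this page, and running the code requires Parallel Computing Toolbox and a supported GPU.

```matlab
% Illustrative sketch: 8-PSK over AWGN using GPU-based System objects.
% Assumes a supported GPU and Parallel Computing Toolbox are available.
pskMod   = comm.gpu.PSKModulator('ModulationOrder', 8);
awgnChan = comm.gpu.AWGNChannel('EbNo', 7, 'BitsPerSymbol', 3);  % example Eb/No
pskDemod = comm.gpu.PSKDemodulator('ModulationOrder', 8);

data  = randi([0 7], 1000, 1);   % random 8-ary symbols
txSig = pskMod(data);            % modulate on the GPU
rxSig = awgnChan(txSig);         % add noise on the GPU
rxData = pskDemod(rxSig);        % demodulate on the GPU

symErrors = sum(data ~= gather(rxData));  % bring result back to the CPU
```

Aside from the comm.gpu package prefix, the objects are drop-in replacements for their CPU counterparts, so an existing simulation can often be accelerated by swapping object names and gathering results back from the GPU where needed.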

