Package: comm
Decode binary low-density parity-check data with a GPU
The comm.gpu.LDPCDecoder System object decodes a binary low-density parity-check (LDPC) code using a graphics processing unit (GPU).
Note: To use this object, you must have a Parallel Computing Toolbox™ license and access to an appropriate GPU. For more about GPUs, see GPU Computing in the Parallel Computing Toolbox documentation.
A GPU-based System object™ accepts typical MATLAB® arrays, or objects that you create using the gpuArray class, as input. GPU-based System objects support input signals with double- or single-precision data types. The output signal inherits its data type from the input signal.
If the input signal is a MATLAB array, then the output signal is also a MATLAB array. In this case, the System object handles data transfer between the CPU and the GPU.
If the input signal is a gpuArray, then the output signal is also a gpuArray. In this case, the data remains on the GPU: when the object is given a gpuArray, calculations take place entirely on the GPU and no data transfer occurs. Passing gpuArray arguments therefore improves performance by reducing simulation time. For more information, see Establish Arrays on a GPU in the Parallel Computing Toolbox documentation.
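As an illustrative sketch of the two input paths (the variable names are hypothetical, and the random vector stands in for real log-likelihood ratios; the default object expects 64800 soft inputs for the DVB-S.2 rate-1/2 code):

```matlab
dec = comm.gpu.LDPCDecoder;          % default DVB-S.2 rate-1/2 code
llr = randn(64800, 1);               % placeholder log-likelihood ratios

% MATLAB array in: the object moves data to the GPU and back for you.
bitsCPU = step(dec, llr);            % output is a MATLAB array

% gpuArray in: the data stays on the GPU, so no transfer occurs.
release(dec);                        % allow the input type to change
bitsGPU = step(dec, gpuArray(llr));  % output is a gpuArray
bits = gather(bitsGPU);              % copy the result back explicitly
```

Keeping the signal as a gpuArray across a chain of GPU-based objects avoids repeated CPU-GPU transfers, which is where the performance benefit comes from.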
To decode a binary low-density parity-check code:
Define and set up your binary low-density parity-check decoder object. See Construction.
Call step to decode a binary low-density parity-check code according to the properties of comm.gpu.LDPCDecoder. The behavior of step is specific to each object in the toolbox.
Note: Starting in R2016b, instead of using the step method to perform the operation defined by the System object, you can call the object with arguments, as if it were a function. For example, y = step(obj,x) and y = obj(x) perform equivalent operations.
h = comm.gpu.LDPCDecoder creates a GPU-based binary low-density parity-check (LDPC) decoder object, h. This object performs LDPC decoding based on the specified parity-check matrix. The object does not assume any patterns in the parity-check matrix.
h = comm.gpu.LDPCDecoder('PropertyName',PropertyValue,...) creates a GPU-based LDPC decoder object, h, with each specified property set to the specified value. You can specify additional name-value pair arguments in any order as ('PropertyName1',PropertyValue1,...,'PropertyNameN',PropertyValueN).
h = comm.gpu.LDPCDecoder(PARITY) creates a GPU-based LDPC decoder object, h, with the ParityCheckMatrix property set to PARITY.
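The construction syntaxes above can be sketched as follows. The specific property values are illustrative only, and MaximumIterationCount refers to the maximum-iterations property described under Properties below:

```matlab
H = dvbs2ldpc(1/2);                  % default DVB-S.2 rate-1/2 parity-check matrix

hDec1 = comm.gpu.LDPCDecoder;        % all properties at their defaults
hDec2 = comm.gpu.LDPCDecoder(H);     % ParityCheckMatrix set to H

hDec3 = comm.gpu.LDPCDecoder( ...    % name-value pair form
    'ParityCheckMatrix', H, ...
    'MaximumIterationCount', 25);    % illustrative non-default value
```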

ParityCheckMatrix
Parity-check matrix. Specify the parity-check matrix as a binary-valued sparse matrix with dimension (N−K)-by-N, where N > K > 0. The last N−K columns of the parity-check matrix must form an invertible matrix in GF(2). This property accepts numeric or logical data types. The upper bound for the value of N is 2^31−1. The default is the parity-check matrix of the half-rate LDPC code from the DVB-S.2 standard, which is the result of dvbs2ldpc(1/2).

OutputValue
Select output value format. Specify the output value format as one of 'Information part' | 'Whole codeword'. The default is 'Information part'. When you set this property to 'Information part', the output contains only the K information bits. When you set it to 'Whole codeword', the output contains all N codeword bits.

DecisionMethod
Decision method. Specify the decision method used for decoding as one of 'Hard decision' | 'Soft decision'. The default is 'Hard decision'. When you set this property to 'Hard decision', the output is decoded bits. When you set it to 'Soft decision', the output is log-likelihood ratios.

MaximumIterationCount
Maximum number of decoding iterations. Specify the maximum number of iterations the object uses as an integer-valued numeric scalar. The default is 50.

IterationTerminationCondition
Condition for iteration termination. Specify the condition to stop the decoding iterations as one of 'Maximum iteration count' | 'Parity check satisfied'. The default is 'Maximum iteration count'. When you set this property to 'Parity check satisfied', the object determines whether the parity checks are satisfied after each iteration and stops early if they are.

NumIterationsOutputPort
Output number of iterations performed. Set this property to true to output the actual number of iterations the object performed. The default is false.

FinalParityChecksOutputPort
Output final parity checks. Set this property to true to output the final parity checks the object calculated. The default is false.
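A hedged sketch of requesting these optional outputs (NumIterationsOutputPort and FinalParityChecksOutputPort are the assumed names of the two properties above; the variable names and random LLR input are placeholders):

```matlab
dec = comm.gpu.LDPCDecoder( ...
    'NumIterationsOutputPort', true, ...
    'FinalParityChecksOutputPort', true);
llr = randn(64800, 1);               % placeholder log-likelihood ratios

% With both ports enabled, step returns three outputs.
[bits, numIter, parityChecks] = step(dec, llr);
% numIter is the number of decoding iterations actually performed;
% parityChecks holds the final parity-check values (all zeros when the
% decoded word satisfies every check).
```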
clone  Create GPU LDPC Decoder object with same property values 
isLocked  Locked status for input attributes and nontunable properties 
release  Allow property value and input characteristics changes 
step  Decode input signal using LDPC decoding scheme 
The GPU LDPC Decoder System object uses the same decoding algorithm as the LDPC Decoder block. See Decoding Algorithm for details.
Transmit an LDPC-encoded, QPSK-modulated bit stream through an AWGN channel, then demodulate, decode, and count errors.
hEnc = comm.LDPCEncoder;
hMod = comm.PSKModulator(4, 'BitInput', true);
hChan = comm.AWGNChannel( ...
    'NoiseMethod', 'Signal to noise ratio (SNR)', 'SNR', 1);
hDemod = comm.PSKDemodulator(4, 'BitOutput', true, ...
    'DecisionMethod', 'Approximate log-likelihood ratio', ...
    'Variance', 1/10^(hChan.SNR/10));
hDec = comm.gpu.LDPCDecoder;
hError = comm.ErrorRate;
for counter = 1:10
    data = logical(randi([0 1], 32400, 1));
    encodedData = step(hEnc, data);
    modSignal = step(hMod, encodedData);
    receivedSignal = step(hChan, modSignal);
    demodSignal = step(hDemod, receivedSignal);
    receivedBits = step(hDec, demodSignal);
    errorStats = step(hError, data, receivedBits);
end
fprintf('Error rate = %1.2f\nNumber of errors = %d\n', ...
    errorStats(1), errorStats(2))