sinusoidalPositionEncodingLayer

Sinusoidal position encoding layer

Since R2023b

    Description

    A sinusoidal position encoding layer maps position indices to vectors using sinusoidal operations. Use this layer in transformer neural networks to provide information about the position of the data in a sequence or image.

    Creation

    Description

    layer = sinusoidalPositionEncodingLayer(outputSize) creates a sinusoidal position encoding layer and sets the OutputSize property.

    layer = sinusoidalPositionEncodingLayer(outputSize,Name=Value) creates a sinusoidal position encoding layer and sets the Positions and Name properties using one or more name-value arguments.

    Properties

    Sinusoidal Position Encoding

    OutputSize — Number of channels in layer output

    This property is read-only.

    Number of channels in the layer output, specified as an even positive integer.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

    Positions — Positions in input

    This property is read-only.

    Positions in the input, specified as one of these values:

    • "auto" — For sequence or spatio-temporal input, use the temporal indices as positions, which is equivalent to using "temporal-indices". For one-dimensional image input, use the spatial indices as positions, which is equivalent to using "spatial-indices". For other input, use the input values as positions, which is equivalent to using "data-values".

    • "temporal-indices" — Use the temporal indices of the input as positions.

    • "spatial-indices" — Use the spatial indices of the input as positions.

    • "data-values" — Use the values in the input as positions.
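    For instance, this sketch creates a layer that uses the input values as positions (the output size and layer name here are illustrative choices, not defaults):

    ```matlab
    % Sketch: set the Positions and Name properties at creation time.
    % OutputSize must be an even positive integer.
    layer = sinusoidalPositionEncodingLayer(256, ...
        Positions="data-values",Name="pos-enc");
    ```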

    Layer

    Name — Layer name

    Layer name, specified as a character vector or a string scalar. For Layer array input, the trainnet and dlnetwork functions automatically assign names to layers with the name "".

    The SinusoidalPositionEncodingLayer object stores this property as a character vector.

    Data Types: char | string

    NumInputs — Number of inputs

    This property is read-only.

    Number of inputs to the layer, returned as 1. This layer accepts a single input only.

    Data Types: double

    InputNames — Input names

    This property is read-only.

    Input names, returned as {'in'}. This layer accepts a single input only.

    Data Types: cell

    NumOutputs — Number of outputs

    This property is read-only.

    Number of outputs from the layer, returned as 1. This layer has a single output only.

    Data Types: double

    OutputNames — Output names

    This property is read-only.

    Output names, returned as {'out'}. This layer has a single output only.

    Data Types: cell

    Examples

    Create Sinusoidal Position Encoding Layer

    Create a sinusoidal position encoding layer with an output size of 300.

    layer = sinusoidalPositionEncodingLayer(300)
    layer = 
      SinusoidalPositionEncodingLayer with properties:
    
              Name: ''
        OutputSize: 300
         Positions: 'auto'
    
       Learnable Parameters
        No properties.
    
       State Parameters
        No properties.
    
    Use properties method to see a list of all properties.
    
    

    Create Neural Network Containing Sinusoidal Position Encoding Layer

    Create a neural network containing a sinusoidal position encoding layer.

    net = dlnetwork;
    
    numChannels = 1;
    
    embeddingOutputSize = 64;
    numWords = 128;
    maxPosition = 128;
    
    numHeads = 4;
    numKeyChannels = 4*embeddingOutputSize;
    
    layers = [ 
        sequenceInputLayer(numChannels,Name="input")
        wordEmbeddingLayer(embeddingOutputSize,numWords,Name="word-emb")
        sinusoidalPositionEncodingLayer(embeddingOutputSize,Name="pos-enc")
        additionLayer(2,Name="add")
        selfAttentionLayer(numHeads,numKeyChannels,AttentionMask="causal")
        fullyConnectedLayer(numWords)
        softmaxLayer];
    
    net = addLayers(net,layers);
    
    net = connectLayers(net,"word-emb","add/in2");

    View the neural network architecture.

    plot(net)
    axis off
    box off
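    As a quick check, you can initialize the network and run a forward pass on a toy sequence of word indices. This is a sketch: the sequence length and random indices are illustrative, and the "CT" format labels the channel and time dimensions of the dlarray input.

    ```matlab
    % Initialize learnable parameters and run inference on a toy input.
    net = initialize(net);
    X = dlarray(randi(numWords,[1 10]),"CT");   % 1 channel, 10 time steps
    Y = predict(net,X);                          % numWords-by-10 softmax scores
    ```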

    Algorithms

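    As a sketch of the underlying computation, the standard sinusoidal position encoding from [1] maps a position pos to a d-dimensional vector (where d is the output size) whose even and odd channels interleave sines and cosines at geometrically spaced frequencies:

    ```latex
    \mathrm{PE}(\mathrm{pos},\,2i)   = \sin\!\left(\frac{\mathrm{pos}}{10000^{2i/d}}\right), \qquad
    \mathrm{PE}(\mathrm{pos},\,2i+1) = \cos\!\left(\frac{\mathrm{pos}}{10000^{2i/d}}\right),
    \qquad i = 0, 1, \ldots, \tfrac{d}{2}-1
    ```

    Because the channels come in sine/cosine pairs, d must be even, which is why OutputSize is restricted to even positive integers.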
    References

    [1] Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. "Attention is all you need." In Advances in Neural Information Processing Systems, Vol. 30. Curran Associates, Inc., 2017. https://papers.nips.cc/paper/7181-attention-is-all-you-need.

    Version History

    Introduced in R2023b