Projection

Project network layers using principal component analysis (PCA) to reduce the number of learnable parameters

Project layers by performing principal component analysis (PCA) on the layer activations, using a data set representative of the training data, and then applying linear projections to the layer's learnable parameters. Forward passes of a projected deep neural network are typically faster when you deploy the network to embedded hardware using library-free C/C++ code generation.

For a detailed overview of the compression techniques available in Deep Learning Toolbox™ Model Compression Library, see Reduce Memory Footprint of Deep Neural Networks.

Figure: Simplified illustration of projection. A two-layer network (three and two neurons, fully connected) is replaced by a three-layer network (three, one, and two neurons) that has fewer weights in total.

Functions

compressNetworkUsingProjection: Compress neural network using projection (Since R2022b)
neuronPCA: Principal component analysis of neuron activations (Since R2022b)
unpackProjectedLayers: Unpack projected layers of neural network (Since R2023b)
ProjectedLayer: Compressed neural network layer using projection (Since R2023b)
gruProjectedLayer: Gated recurrent unit (GRU) projected layer for recurrent neural network (RNN) (Since R2023b)
lstmProjectedLayer: Long short-term memory (LSTM) projected layer for recurrent neural network (RNN) (Since R2022b)
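
A typical workflow combines these functions: analyze activations with neuronPCA, compress with compressNetworkUsingProjection, and optionally unpack the resulting ProjectedLayer objects before code generation. The sketch below assumes you already have a trained dlnetwork `net` and a minibatchqueue `mbq` of representative training data (both hypothetical variable names); the ExplainedVarianceGoal value shown is an illustrative choice, not a recommendation.

```matlab
% Analyze neuron activations over representative data (net and mbq
% are assumed to exist; see the function reference pages for setup).
npca = neuronPCA(net,mbq);

% Compress the network, keeping enough principal components to
% explain (approximately) 95% of the activation variance.
netProjected = compressNetworkUsingProjection(net,npca, ...
    ExplainedVarianceGoal=0.95);

% Optionally replace ProjectedLayer objects with equivalent network
% layers, for example before library-free C/C++ code generation.
netUnpacked = unpackProjectedLayers(netProjected);
```

Lowering ExplainedVarianceGoal removes more learnable parameters at the cost of accuracy, so compressed networks are usually fine-tuned afterward.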
