Convert the weights of a neural network from floating point "double" to fixed point

Hi,
I am trying to convert the weights of my neural network from floating-point "double" to a two's-complement fixed-point representation, with 2 bits for the integer part and 14 bits for the fraction, so that I can use them on my FPGA.
What is the best function in MATLAB for doing this, and which rounding method does that function use?
Thanks in advance,
s.s

Answers (2)

Jamie Haas on 23 Feb 2017
The Neural Network Toolbox can be used to create a trained neural network object, 'net'.
Its coefficients are all double-precision floating point. That won't work in the world of fixed-point hardware implementations!
The following object values need to be converted (this example assumes a single input and a hidden layer size of 5), as listed below:
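For a typical feedforwardnet(5) with one input and one output, these properties would be (the exact sizes depend on your architecture):
% double-precision coefficients of the trained network object
net.IW{1,1}   % input weights,  5x1
net.LW{2,1}   % layer weights,  1x5
net.b{1}      % hidden biases,  5x1
net.b{2}      % output bias,    1x1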
To convert the coefficients to fixed point programmatically, the following may be added to your script. To convert IW (the input weights), do the following, changing the fixed-point resolution according to your requirements:
% Input-layer weights of a trained network from the NN toolbox
net.IW{1,1}
% quantize the weights into a temporary fi object
% (signed, 12-bit word length, 7 fraction bits -- adjust as needed)
net_IW_1_1 = fi(net.IW{1,1}, 1, 12, 7);
% convert the fi object back to doubles
% note: we need to keep the double representation, BUT the values
% now carry fixed-point precision
net_IW_1_1 = double(net_IW_1_1);
% now the double matrix can be written back into the network object
net.IW{1,1} = net_IW_1_1;
% Voila!
net.IW{1,1}
Do this for all of the weights in a script (see the sketch below), then you can generate your "myNeuralNetwork" function, which will have fixed-point precision weights.
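A minimal sketch of that loop, quantizing everything to the signed 16-bit, 14-fraction-bit (2.14) format from the question. Note that fi comes with Fixed-Point Designer and, by default, rounds to nearest and saturates on overflow, which answers the rounding-method part of the question:
% quantize every connected weight and bias to s16.14
% (fi defaults: RoundingMethod = 'Nearest', OverflowAction = 'Saturate')
for i = 1:numel(net.IW)
    if ~isempty(net.IW{i})
        net.IW{i} = double(fi(net.IW{i}, 1, 16, 14));
    end
end
for i = 1:numel(net.LW)
    if ~isempty(net.LW{i})
        net.LW{i} = double(fi(net.LW{i}, 1, 16, 14));
    end
end
for i = 1:numel(net.b)
    if ~isempty(net.b{i})
        net.b{i} = double(fi(net.b{i}, 1, 16, 14));
    end
end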
Whichever function you used for the neurons (e.g., sigmoid) will also need to be converted to fixed point. I simply take the sigmoid output and apply the same fi(...) call as above.
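For example, a sketch of quantizing the activation output in the same way (tansig is the default hidden-layer function for feedforwardnet; substitute whichever function your network actually uses):
% evaluate the activation in double precision, then quantize to s16.14
a = tansig(n);                 % n is the neuron's net input
a = double(fi(a, 1, 16, 14));  % same format as the weights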

Shashank Prasanna on 30 Jan 2013

