MATLAB Answers

Convert the weights of a neural network from floating point ("double") to fixed point

Asked by mangood UK on 30 Jan 2013


I am trying to convert the weights of my neural network from floating point ("double") to a two's-complement fixed-point representation, with 2 bits for the integer part and 14 bits for the fraction, so I can use them on my FPGA.

What is the best function in MATLAB for this?

And what rounding method does that function use?

Thanks in advance.
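One way to sketch this conversion (this is an illustrative example, not an official answer from the thread): if the Fixed-Point Designer toolbox is available, the `fi` object can represent a signed 16-bit Q2.14 value directly; by default `fi` uses round-to-nearest and saturates on overflow. Without the toolbox, the same quantization can be done manually in base MATLAB. The weight values below are made up for illustration.

```matlab
% Example weights (illustrative values only).
w = [0.7071 -1.25 1.9999 -2.5];

% --- Option 1: manual Q2.14 quantization in base MATLAB ---
scale = 2^14;                      % 14 fraction bits
q = round(w * scale);              % round-to-nearest integer code
q = max(min(q, 2^15 - 1), -2^15);  % saturate to the signed 16-bit range
q = int16(q);                      % two's-complement storage for the FPGA

w_fixed = double(q) / scale;       % quantized weights back in double

% --- Option 2: fi object (requires Fixed-Point Designer) ---
% signed = 1, word length = 16 bits, fraction length = 14 bits
% wf = fi(w, 1, 16, 14);
```

Note that 2 integer bits plus 14 fraction bits gives a 16-bit word with a representable range of [-2, 2), so any weight outside that range (like the -2.5 above) saturates; if your trained weights can exceed this, you may need to rescale them first.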




1 Answer
