MATLAB Answers


Convert the weights of a neural network from floating point "double" to fixed point

Asked by mangood UK on 30 Jan 2013


I am trying to convert the weights of my neural network from floating point ("double") to a 2's-complement fixed-point representation, with 2 bits for the integer part and 14 bits for the fraction, so I can use them on my FPGA.

What is the best function in MATLAB for this?

And what rounding method does that function use?

Thanks in advance.
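A sketch of one common approach, using the `fi` object from Fixed-Point Designer (assuming that toolbox is available). A Q2.14 format corresponds to a signed 16-bit word with 14 fraction bits; the example weight values below are made up for illustration:

```matlab
% Example weights (assumed values for illustration)
w = [0.7071 -1.25 0.003 1.9999 -2.5];

% Signed (2's complement), 16-bit word length, 14 fraction bits -> Q2.14.
% By default, fi rounds to nearest and saturates values outside the
% representable range [-2, 2 - 2^-14]; both behaviors are configurable
% through the fimath properties RoundingMethod and OverflowAction.
wfix = fi(w, 1, 16, 14);

double(wfix)         % quantized values, converted back to double
storedInteger(wfix)  % raw 16-bit 2's-complement codes for the FPGA

% Without the toolbox, a similar quantization can be done manually:
scale = 2^14;
wq    = round(w * scale);               % scale and round to nearest
wq    = max(min(wq, 2^15 - 1), -2^15);  % saturate to the int16 range
wq_hw = wq / scale;                     % quantized values as doubles
```

Note that the manual version is only an approximation of `fi`'s behavior: MATLAB's `round` breaks ties away from zero, whereas the `fi` default 'Nearest' mode breaks ties toward positive infinity, so a handful of exactly-halfway values can differ by one LSB.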





1 Answer
