I am trying to convert the weights of my neural network from double-precision floating point ("double") to a two's-complement fixed-point representation with 2 integer bits and 14 fractional bits (a 16-bit Q2.14 format), so that I can use them in my FPGA.
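For context, here is a minimal sketch of the conversion I have in mind (written in Python just for illustration; the helper name, and the choice of round-to-nearest with saturation, are my own assumptions, not an existing MATLAB API):

```python
def to_q2_14(x):
    """Convert a float to signed Q2.14 (2 integer bits incl. sign, 14 fraction
    bits) and return the 16-bit two's-complement bit pattern as an integer.
    Hypothetical helper: rounds to nearest, saturates on overflow."""
    SCALE = 1 << 14                               # one LSB represents 2^-14
    q = round(x * SCALE)                          # round to nearest step
    q = max(-(1 << 15), min((1 << 15) - 1, q))    # saturate to signed 16-bit range
    return q & 0xFFFF                             # two's-complement in 16 bits

# Example: 0.5 -> 0x2000, -0.25 -> 0xF000, 2.0 saturates to 0x7FFF
```

The representable range in this format is [-2, 2 - 2^-14], so any weight outside that range must either be saturated (as above) or rescaled beforehand.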
What is the best MATLAB function for this conversion, and which rounding method does that function use?
Thanks in advance.