I had to do an FP8 adder as my final project for my FPGA lab. It was at least a full page of state machine. And I write small. I ended up skipping the rounding and just truncating instead because I was so done with it.
Consider me educated on the mantissa. That's a nifty pedantry.
For the leading 0 counter, I've found it's even better for the tool if I use a for loop. Keeps things scalable and it's even less code. I'm not understanding this takeaway though
> Sometimes a good design is about more than just performance.
The good design (unless the author's note that it's easier to read and maintain somehow makes it worse design?) was also the better-performing one. So sometimes good design creates performance.
Likewise for pipelining: it would have been interesting to know if the tools can reorder operations if you give them some pipeline registers to play with. In Xilinx land it'll move your logic around and utilize the flops as it sees fit.
I think the thing that truly scares me about floating point is not IEEE-754, or even the weird custom floating point formats that people come up with, but the horrible, horrible things that some people think pass for a floating point implementation in hardware: people who think that things like infinities or precision in the last place are kind of overrated.
And anyone implementing numerical algorithms is thankful for the tremendous amount of thought put into the fp spec. The complexity is worth it and makes the code much safer.
Typo in the c++?
> cout << "x =/= x: " << ((x=x) ? "true" : "false") << endl;
Should be x != x?
Everyone who has ever had to build a floating point unit has hated it with a passion. I've watched it done from afar, and done it myself.