On Tuesday, August 18, 2020 at 1:44:10 PM UTC-7, gtwrek wrote:
(snip, I wrote)
Post by gtwrek
Post by ***@u.washington.edu
Many algorithms are much better done in fixed point, but teaching it
seems to be a lost art. Part of the reason is that most high-level
languages don't make it easy to do.
(snip)
Post by gtwrek
Post by ***@u.washington.edu
Maybe some will have some good counterexamples. More DSP chips support
floating point, and it isn't all that hard to do now. But often enough,
fixed point is the right choice.
"Doing DSP" in FPGAs (more and more common) is almost exclusively fixed
point. Floating point in FPGAs makes almost no sense at all. Even
though FPGA vendors keep offering more and more "high-level" tools that
make floating point more easily accessible within FPGAs, it's mostly
(drumroll...) pointless.
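For anyone wondering what the fixed-point bookkeeping looks like, here
is a small C sketch of a Q15 multiply; the type and function names are
mine, nothing standard:

#include <stdint.h>
#include <stdio.h>

/* Q15 fixed point: 16-bit values scaled by 2^15, so 0x7FFF ~= 0.99997.
   The multiply itself is one instruction; the bookkeeping (widening,
   rescaling shift, saturation) is what high-level languages make you
   spell out by hand -- there is no native Q15 type in C. */
typedef int16_t q15_t;

static q15_t q15_mul(q15_t a, q15_t b)
{
    int32_t p = (int32_t)a * b;      /* 32-bit product, now Q30 */
    p >>= 15;                        /* rescale back to Q15 */
    if (p >  32767) p =  32767;      /* saturate instead of wrapping */
    if (p < -32768) p = -32768;
    return (q15_t)p;
}

int main(void)
{
    q15_t half = 0x4000;             /* 0.5 in Q15 */
    q15_t x    = 0x6000;             /* 0.75 in Q15 */
    printf("0.5 * 0.75 = %f\n", q15_mul(half, x) / 32768.0);
    return 0;
}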
It does seem that there are some people doing floating point in FPGAs,
and for some scientific (non-DSP) problems it might be useful.
It isn't so bad until you get to pre- and post-normalization, which
take a huge amount of logic for add/subtract, but not so much for
multiply and divide.
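A minimal C sketch of the data path a floating-point adder has to
implement, just to show where the two big shifts sit; the field layout
is IEEE single precision, and the function name and test values are
mine:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

/* Add two positive, normalized IEEE single-precision values by hand.
   No rounding, no zeros/infinities/NaNs -- data path only. */
static uint32_t fadd_sketch(uint32_t a, uint32_t b)
{
    uint32_t ea = (a >> 23) & 0xFF, eb = (b >> 23) & 0xFF;
    uint32_t ma = (a & 0x7FFFFF) | 0x800000;   /* restore hidden 1 */
    uint32_t mb = (b & 0x7FFFFF) | 0x800000;

    /* Pre-normalization: barrel-shift the smaller operand right so
       the exponents match.  The shift distance can be anything from
       0 to 24, which is why this shifter is wide and expensive. */
    if (ea < eb) { uint32_t t = ea; ea = eb; eb = t;
                   t = ma; ma = mb; mb = t; }
    mb >>= (ea - eb > 24) ? 24 : (ea - eb);

    uint32_t sum = ma + mb;          /* the add itself is cheap */

    /* Post-normalization: put the leading 1 back at bit 23.  For
       subtraction of nearly equal values this needs a full
       leading-zero counter plus another full-width barrel shifter. */
    while (sum >= 0x1000000) { sum >>= 1; ea++; }  /* carry out */
    while (sum <  0x800000)  { sum <<= 1; ea--; }  /* cancellation */

    return (ea << 23) | (sum & 0x7FFFFF);
}

int main(void)
{
    /* 1.5 is 0x3FC00000, 2.25 is 0x40100000; 1.5 + 2.25 = 3.75,
       which should print as 40700000. */
    printf("%08" PRIX32 "\n", fadd_sketch(0x3FC00000, 0x40100000));
    return 0;
}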
Newer FPGA families with 6-input LUTs, instead of the 4-input LUTs of
previous generations, should be better: a 6-input LUT can implement a
4:1 multiplexer, so a barrel shifter needs about half as many logic
levels.
Post by gtwrek
Other than padding the FPGA vendors' pockets by doing more and more
useless things on an FPGA and driving up the resources consumed. This
causes bigger FPGAs to be (needlessly) selected.
Over 50 years ago, IBM designed System/360 with a hexadecimal
floating-point format. That is, the exponent base is 16 instead of 2.
This is especially convenient for fast hardware (that is, not
microprogrammed) implementations, as it simplifies the barrel shifter
needed: normalization moves the fraction in 4-bit steps instead of
single bits.
Numerically, though, it was not so nice, and some new numerical
analysis methods had to be found to work with it. It might not
be so bad in an FPGA, though.
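Here is a small C sketch of decoding the System/360 single-precision
hex format, to show what the base-16 exponent means; the function name
and test value are mine:

#include <stdint.h>
#include <stdio.h>
#include <math.h>

/* IBM System/360 single precision: sign bit, 7-bit excess-64
   exponent of base 16, 24-bit fraction.
   Value = (-1)^sign * 0.ffffff (base 16) * 16^(exponent - 64).
   A normalized fraction only needs its leading hex *digit* nonzero,
   hence the 4-bit normalization steps. */
static double ibm32_to_double(uint32_t w)
{
    int    sign = (w >> 31) & 1;
    int    exp  = (w >> 24) & 0x7F;             /* excess-64 */
    double frac = (w & 0xFFFFFF) / 16777216.0;  /* 24-bit 0.f */
    double v = frac * pow(16.0, exp - 64);
    return sign ? -v : v;
}

int main(void)
{
    /* 0x41100000: exponent 0x41 = 65, fraction 0x100000 = 1/16,
       so value = (1/16) * 16^(65-64) = 1.0 */
    printf("%g\n", ibm32_to_double(0x41100000));
    return 0;
}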