Hacker News

It's a quick and easy (even lazy) way to measure the floating-point error in an existing double-precision algorithm: rerun the computation at quadruple precision and compare the outputs. If there's any significant difference, take a closer look.
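A minimal sketch of the idea in Python, using the stdlib `decimal` module at 34 significant digits as a stand-in for quadruple precision (true binary128 isn't natively available in most languages; this and the naive-summation example are my own illustration, not from the comment):

```python
from decimal import Decimal, getcontext

getcontext().prec = 34  # roughly the significand precision of IEEE quad

def naive_sum(xs, to_num):
    """Accumulate the string values xs using the given numeric type."""
    s = to_num(0)
    for x in xs:
        s += to_num(x)
    return s

xs = ["0.1"] * 1_000_000  # 0.1 is not exactly representable in binary64

double_result = naive_sum(xs, float)    # the "existing" double-precision run
quad_result = naive_sum(xs, Decimal)    # the high-precision reference run

# The discrepancy between the two runs estimates the rounding error
# accumulated by the double-precision version.
err = abs(double_result - float(quad_result))
print(err)
```

Here the high-precision run gives exactly 100000, while the double-precision sum drifts by roughly 1e-6, so the comparison flags the accumulation as a spot worth a closer look.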


