It's a quick (if somewhat lazy) way to measure the floating-point error in an existing double-precision algorithm: run the same computation at quadruple precision and compare the outputs. If there's any significant difference, take a closer look.
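A minimal sketch of the idea in Python, using the standard-library `decimal` module at 34 significant digits as a stand-in for quad precision (the summation example and function names are just illustrations, not anyone's specific algorithm):

```python
from decimal import Decimal, getcontext

# IEEE quad precision carries roughly 33-34 significant decimal digits.
getcontext().prec = 34

def naive_sum_double(values):
    """Sum in ordinary double precision (Python floats are IEEE doubles)."""
    total = 0.0
    for v in values:
        total += v
    return total

def naive_sum_quad(values):
    """Same algorithm at ~34-digit precision via Decimal."""
    total = Decimal(0)
    for v in values:
        total += Decimal(v)  # Decimal(float) converts exactly, no rounding
    return total

# A sum known to lose precision in double: tiny terms after a huge one.
values = [1e16] + [1.0] * 1000

d = naive_sum_double(values)   # the 1.0s all round away against 1e16
q = naive_sum_quad(values)     # keeps them: 1e16 + 1000
rel_err = abs((Decimal(d) - q) / q)
print(f"double:         {d!r}")
print(f"quad:           {q}")
print(f"relative error: {rel_err:.2e}")
```

Here the two results disagree at a relative level far above machine epsilon for the inputs, which is exactly the kind of red flag that tells you the double-precision version deserves a closer look (e.g. switching to compensated summation).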