
I know there are a couple of threads on similar topics (What's the best (for speed) arbitrary-precision library for C++? and The best cross platform (portable) arbitrary precision math library), and I take from these threads that GMP, or something based on it like MPFR, is the fastest lib available. But I am specifically wondering: if I only wanted, say, 30 decimal places, would __float128 of the quadmath lib be faster?

Also how does MAPM stack up against MPFR?

It looks from this website:

http://pari.math.u-bordeaux.fr/benchs/timings-mpfr.html

that MPFR does pretty well, but there are also CLN and apfloat?

fpghost
    As a side note (you probably are aware of this, but just to be sure), a quad-precision float doesn't give you ~30 decimal places but 113 binary places. – Christian Rau Oct 29 '12 at 08:17
  • I know quad is 120bits in total, but I thought that equated roughly to 30deci? – fpghost Oct 29 '12 at 10:24
    You're of course right (though, it's 128 bits in total). I just wanted you to keep in mind that binary floating point numbers don't give you actual decimal precision. It's very likely you already know this, but just to be sure. So don't expect your exact decimal number with 30 places to be represented **exactly** in binary quad-precision, they will only *equate roughly* to each other. – Christian Rau Oct 29 '12 at 10:37
    A double-double library might be the fastest (a bit less precision than float128, but uses the FPU), unless you have hardware float128 (some IBM computers I think). But if you need many weird functions, availability is likely the only criterion you can afford. – Marc Glisse Jan 09 '13 at 15:54

1 Answer


If you only want 30 decimal places, I'm pretty sure (without actually benchmarking it) that GMP or MPFR cannot be as fast as some simple home-brewed routines. That depends a bit on the operations you need, of course.

The reason behind this is that GMP et al. are really well written with very large integers in mind (like 1,000+ or even 1,000,000+ decimal digits). But for very small bignums -- small, yet too big for int -- the overhead of some extra ifs, calls, or even memory allocation will kill them in every benchmark.

It's an easy exercise to build your own C++ functions that give you 128-bit addition/subtraction from two 64-bit unsigneds. Even multiplication is done easily. Division might not be as simple, but do you need it? More elaborate functions (roots, log, etc.) are much more work, and there GMP et al. may pay off.
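A minimal sketch of the kind of home-brewed routine meant here (the `u128` type and `add`/`sub` helpers are illustrative names, not from any library): 128-bit add and subtract built from two 64-bit halves, propagating the carry/borrow by checking for wraparound.

```cpp
#include <cstdint>

// A minimal 128-bit unsigned value stored as two 64-bit halves.
struct u128 {
    std::uint64_t lo;
    std::uint64_t hi;
};

// Addition: add the low halves; if the low sum wrapped around,
// carry 1 into the high half.
u128 add(u128 a, u128 b) {
    u128 r;
    r.lo = a.lo + b.lo;
    r.hi = a.hi + b.hi + (r.lo < a.lo ? 1 : 0);
    return r;
}

// Subtraction: subtract the low halves; if a borrow was needed
// (a.lo < b.lo), take 1 from the high half.
u128 sub(u128 a, u128 b) {
    u128 r;
    r.lo = a.lo - b.lo;
    r.hi = a.hi - b.hi - (a.lo < b.lo ? 1 : 0);
    return r;
}
```

With recent compilers you would more likely reach for the built-in `__int128`/`unsigned __int128` types (GCC/Clang) and let the compiler emit the carry chain, but the hand-rolled version shows why such small fixed-width code beats a general bignum library on overhead.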

cxxl
  • thanks for the reply, unfortunately I do need the elaborate functions. – fpghost Dec 01 '12 at 09:15
  • What exactly do you need? And in which precision? – cxxl Dec 01 '12 at 09:59
  • 50-60dp of precision and just about everything from log, abs, pow, roots and more. – fpghost Dec 01 '12 at 10:22
  • As long as the results are again integers, square and cube roots, logarithm and integer powers are not too difficult. See "Hacker's delight" by H S Warren for details. – cxxl Dec 01 '12 at 14:04
  • The question is about MPFR and __float128. Why is this answer about integers and integer operations? – Pascal Cuoq Jul 19 '14 at 19:11
  • Because any float calculation can well be accomplished with integer arithmetic as well. He knows the precision he needs, that makes it easier to use integers. – cxxl Jul 21 '14 at 21:08