
I happened to come across a question here that mentioned assigning -0.0 to a float, or something along those lines. However, from what I have read so far, negative zero is the same as positive zero, so why not just have one zero? Why do both exist?

Andy

1 Answer


Each possible floating point value actually represents a small range of possible real-world numbers (because there are only a finite number of possible floating point numbers but an infinite number of actual values). So 0.0 represents a value anywhere between 0.0 and a very small positive number, whereas -0.0 represents a value anywhere between 0.0 and a very small negative number.

Note, however, that when we compare 0.0 and -0.0 they are considered to be equal, even though their actual representations in bits are different.

Paul R
  • Thank you for your answer. I understand now, but what is the use of this? Also, just out of interest, how are the two represented in bits? – Andy Sep 08 '13 at 10:59
  • The "use" is mathematical - for some algorithms you need to preserve the sign of a value even when it becomes vanishingly small. 0.0 is 0x00000000, -0.0 is 0x80000000 - the only difference is the sign bit (bit 23). – Paul R Sep 08 '13 at 12:39
  • Ok, thank you for that information. Very interesting. – Andy Sep 08 '13 at 13:03