
Possible Duplicate:
Why can't I return a double from two ints being divided

This statement in C with gcc:

float result = 1 / 10;

Produces the result 0.

But if I define variables a and b with values 1 and 10 respectively and then do:

float result = a / b;

I get the expected answer of 0.1

What gives?

CT Hildreth

4 Answers


When the / operator is applied to two integer operands, it performs integer division. So the result of 1 / 10 is 0.

When at least one operand of / is a float, it performs floating-point division, and the result is 0.1 as you intend.

Example :

printf("%f\n", 1.0f / 10); /* output : 0.100000 (the 'f' suffix makes 1.0f a float, not a double) */
printf("%d\n", 1 / 10);    /* output : 0 */

Example with variables :

int a = 1, b = 10;

printf("%f\n", (float)a / b); /* output : 0.100000 */
Sandro Munda

That happens because 1 and 10 are integer constants, so the division is done using integer arithmetic.

If at least one of your variables a and b is a float, it will be done using floating-point arithmetic.

If you want to do it with number literals, write at least one of them as a float literal, for example:

float result = 1.0f / 10;

Or cast one of them to float, which is a bit more verbose:

float result = 1 / (float)10;
Jesper

1 and 10 are both integer literals, so the division returns an integer. When you define a and b, you are defining them as floats, so the division is done in floating point. If you use the literals 1.0 and 10.0 instead, you will get the correct result.

msmucker0527

If you want a float result, just cast one of the operands, as follows:

float result = (float)a / b;

MCG