
So today I learnt that C# treats a byte as an int when performing an addition operation. I'll showcase what I mean.

Take this example:

byte a = 10;
byte b = 10;
byte c = (byte)(a + b);

It will read a and b as two ints, like so:

0000 0000 0000 0000 0000 0000 0000 1010 +
0000 0000 0000 0000 0000 0000 0000 1010

which would result in this, due to how binary addition works.

0000 0000 0000 0000 0000 0000 0001 0100

The result then gets trimmed down to only 8 bits because of the (byte) cast, giving us 0001 0100.
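
To make the trimming visible, here's a rough sketch I put together (not part of my original example) where the sum no longer fits in a byte:

using System;

byte x = 200;
byte y = 100;
int sum = x + y;          // both operands are promoted to int, so sum is 300
byte z = (byte)(x + y);   // the cast keeps only the low 8 bits: 300 is 1 0010 1100 in binary, so z is 0010 1100 = 44

Console.WriteLine(sum);   // 300
Console.WriteLine(z);     // 44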

My question is: why does C# first convert a and b to an int before performing the add operation, and how does it do it?

Is the answer related to why the following produces the compiler error Cannot implicitly convert type 'int' to 'byte'. An explicit conversion exists (are you missing a cast?)

byte a = 10;
byte b = 10;
byte c = a + b;
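
As a quick sanity check (just a sketch to illustrate the point, not part of the snippet above), the static type of a + b really does come out as int:

using System;

byte a = 10;
byte b = 10;
var c = a + b;                    // c is inferred as int, not byte
Console.WriteLine(c.GetType());   // prints System.Int32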
    The reason is because C# copies C and C++ where _byte addition is undefined_ because byte addition will invariably overflow making it useless for arithmetic. If you still actually want that then cast the int result back to a byte (as for performance: operating on individual bytes is very likely much slower than your machine's word-size anyway). – Dai Apr 21 '22 at 04:02

0 Answers