So today I learnt that C# treats a byte as an int when performing an addition operation. I'll showcase what I mean.
Take this example:
byte a = 10;
byte b = 10;
byte c = (byte)(a + b);
It will read a and b as two ints, like so:
0000 0000 0000 0000 0000 0000 0000 1010 +
0000 0000 0000 0000 0000 0000 0000 1010
which, due to how binary addition works, results in
0000 0000 0000 0000 0000 0000 0001 0100
The (byte) cast then trims the result down to its low 8 bits, giving us 0001 0100 (decimal 20).
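With a = b = 10 the trimming is invisible, since 20 fits in a byte anyway. Here's a minimal sketch (values picked purely for illustration) that makes the truncation visible:

byte x = 200;
byte y = 100;
// 200 + 100 = 300, which in binary is 1 0010 1100 and needs 9 bits.
// The (byte) cast keeps only the low 8 bits: 0010 1100 = 44.
byte z = (byte)(x + y);
Console.WriteLine(z); // prints 44, not 300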
My question is: why and how does C# convert a and b to int before performing the addition?
Is the answer related to why this fails to compile with the error Cannot implicitly convert type 'int' to 'byte'. An explicit conversion exists (are you missing a cast?)
byte a = 10;
byte b = 10;
byte c = a + b;
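For what it's worth, if I let the compiler infer the type instead, it compiles fine and reports the result as an int (a quick sketch; GetType() just reports the runtime type of the result):

byte a = 10;
byte b = 10;
var c = a + b;                  // compiles: c is inferred as int
Console.WriteLine(c.GetType()); // prints System.Int32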