
If I have a struct like so:

struct example
{
   uint16_t bar:1;
   uint16_t foo:1;
};

Then I have another 16-bit variable: uint16_t value;

If I want the first bit in "value" to be assigned to "example.bar", and the second bit in value to be assigned to "example.foo", how would I do that?

EDIT:

I tried the following:

#include <stdio.h>

typedef struct
{
    unsigned short      in_1_16;
    unsigned short      in_17_32;
    unsigned short      in_33_48;
} READSTR;

typedef struct 
{   
    unsigned short      chan_1_16;
    unsigned short      chan_17_32;
    unsigned short      chan_33_48; 
} TLM;

int main(void)
{
    TLM tlm;
    tlm.chan_1_16 = 0xFFFF;
    READSTR t675;

    t675.in_1_16 = 0x10;
    t675.in_17_32 = 0x9;
    t675.in_33_48 = 0x8;

    tlm.chan_1_16 |= t675.in_1_16 & 1;
    tlm.chan_1_16 |= t675.in_1_16 & (1 << 1);
    tlm.chan_1_16 |= t675.in_1_16 & (1 << 2);
    tlm.chan_1_16 |= t675.in_1_16 & (1 << 3);
    tlm.chan_1_16 |= t675.in_1_16 & (1 << 4);
    tlm.chan_1_16 |= t675.in_1_16 & (1 << 5);
    tlm.chan_1_16 |= t675.in_1_16 & (1 << 6);
    tlm.chan_1_16 |= t675.in_1_16 & (1 << 7);
    tlm.chan_1_16 |= t675.in_1_16 & (1 << 8);
    tlm.chan_1_16 |= t675.in_1_16 & (1 << 9);
    tlm.chan_1_16 |= t675.in_1_16 & (1 << 10);
    tlm.chan_1_16 |= t675.in_1_16 & (1 << 11);
    tlm.chan_1_16 |= t675.in_1_16 & (1 << 12);
    tlm.chan_1_16 |= t675.in_1_16 & (1 << 13);
    tlm.chan_1_16 |= t675.in_1_16 & (1 << 14);
    tlm.chan_1_16 |= t675.in_1_16 & (1 << 15);
    printf("%x", tlm.chan_1_16);

    return 0;
}

So I am setting the value in one struct to all ones (0xFFFF), and I am trying to set it to 0x10, bit by bit. But when I run this code I still get 0xFFFF. What am I doing wrong?

Blade3

4 Answers


It is impossible to tell, because bit fields are very poorly defined by the standard. Among other things, you cannot know the following (the sketch after the list shows how compiler-specific the result can be):

  • whether a plain int bit field is treated as signed or unsigned
  • the bit order (where is the lsb?)
  • whether the bit field can reach past the memory alignment of the CPU or not
  • the alignment of non-bit field members of the struct
  • the memory alignment of bit fields (to the left or to the right?)
  • the endianness of bit fields
  • whether plain int values assigned to them are interpreted as signed or unsigned
  • how bit fields are promoted implicitly by the integer promotions
  • whether signed bit fields are one’s complement or two’s complement
  • location of padding bytes
  • how padding bits are treated
  • values of padding bits
  • and so on
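
To illustrate just how implementation-specific this is, here is a small sketch (mine, not part of the original answer) that dumps the raw bits the compiler actually produced for the question's struct; the printed value may differ from one compiler to another:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

struct example
{
    uint16_t bar:1;
    uint16_t foo:1;
};

int main(void)
{
    struct example x = {0};
    uint16_t raw;

    x.foo = 1;                       /* set only foo */
    memcpy(&raw, &x, sizeof raw);    /* assumes the struct occupies at least 16 bits */

    /* Where foo ends up in raw depends entirely on the compiler's bit-field layout. */
    printf("sizeof(struct example) = %zu, raw = 0x%04X\n",
           sizeof(struct example), (unsigned)raw);
    return 0;
}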

As you hopefully can tell, bit fields are very, very bad and should never be used. The only sensible solution is to rewrite the code without bit fields:

#include <stdint.h>

#define FOO 0x01U
#define BAR 0x02U

uint16_t value = 0;

value |= FOO;  /* set FOO bit */
value &= ~FOO; /* clear FOO bit */

The above code will compile with any C or C++ compiler, for any system in the world, and you will get the expected result.
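
Applied to the question's original example, a minimal sketch (mine, not from the original answer; the BIT0/BIT1 names are just illustrative) of copying the first two bits of value with plain masks and shifts could look like this:

#include <stdio.h>
#include <stdint.h>

#define BIT0 0x0001U
#define BIT1 0x0002U

int main(void)
{
    uint16_t value = 0x0002;   /* example input: only bit 1 set */
    uint16_t flags = 0;

    /* copy bit 0 and bit 1 of value into the corresponding bits of flags */
    if (value & BIT0) flags |= BIT0;
    if (value & BIT1) flags |= BIT1;

    printf("flags = 0x%04X\n", (unsigned)flags); /* prints flags = 0x0002 */
    return 0;
}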

Lundin
    It can be executed on C or C++ compiler, but for C++ using `#define` is BAD :) – Matthieu M. Feb 02 '11 at 16:04
    @Matthieu M. - It's as bad as you think it is, and it's as dangerous as you are ignorant about it. The above use of macros is perfectly safe. Using a global `const` variable makes it no more or less safe. Dogmatic rejection of feature X is invariably misguided. (Except `gets()`. There is no safe use case for `gets()`.) – Chris Lutz Feb 02 '11 at 16:16
    @Chris: Dogmatic means without valid reasons, in C++ `const` variables are much safer than macros because they are treated by the compiler, not the preprocessor, and thus respect scoping rules. I'd note though, that I used a smiley after "BAD" in order to signal that it was mostly a pun... – Matthieu M. Feb 02 '11 at 16:26
  • @Matthieu: #define is neither good nor bad for constants. A "const unsigned int" would yield exactly the same machine code, with no advantages or disadvantages in typecasting. The *only* difference in this case is where in read-only memory the constant will be allocated. Something that may have been bad would be enum constants. Those may or may not be signed, which can get you in trouble with integer promotions unless you explicitly typecast them to the desired type. – Lundin Feb 02 '11 at 20:30
  • As for scoping rules, they are hardly dangerous for constants. The only damage you will cause if you expose a global const to the rest of the program is possible name collisions. They can never cause spaghetti code like global read/write variables can. – Lundin Feb 02 '11 at 20:33
  • @Lundin: actually, no memory will be allocated for defines. – Matthieu M. Feb 03 '11 at 07:16
  • @Matthieu: Then where do you think the numbers used by the program come from? Everything used by your program has to be stored somewhere; that is very fundamental logic. When you #define a constant, the number gets hardcoded together with the machine code, just as anonymous integer/string literals are. The amount of memory used is the same as if it was an explicit const. – Lundin Feb 03 '11 at 07:36
    @Lundin: not necessarily, unless the `const` can be merged together, which is only possible if the compiler may prove that their address is not taken, each `const` will have a dedicated memory area, whereas the `#define` constants and enums can simply be hardcoded "in place". – Matthieu M. Feb 03 '11 at 07:54
  • @Matthieu: How this is handled is highly system/compiler specific. You can't say, from a general C programming point-of-view, where a "const" will end up in the program memory. On some systems, merging consts together would be strictly forbidden, for example. – Lundin Feb 04 '11 at 12:44
#include <iostream>

typedef unsigned short uint16;

struct example
{
   uint16 bar:1;
   uint16 foo:1;
};

union foo
{
    struct example x;
    uint16 value;
};

int main()
{
    uint16 bar = 0x01;

    foo f;
    f.value = bar;

    std::cout << f.x.bar << std::endl; // display 1
    std::cout << f.x.foo << std::endl; // display 0
}

EDIT
This snippet was used in production code and worked as intended. It was also backed by a static assert and unit tests that guaranteed its correctness; since those are not reproduced here, I agree it should not be used "as is".
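
For illustration only (my sketch, not part of the original answer, using C++11's static_assert for brevity and the example/foo/uint16 names from the snippet above), such a compile-time check could be as simple as:

// Fails to compile if the bit-field struct or the union ends up larger
// than the 16-bit value they are meant to alias.
static_assert(sizeof(example) == sizeof(uint16), "example must occupy exactly 16 bits");
static_assert(sizeof(foo) == sizeof(uint16), "union foo must occupy exactly 16 bits");

Note that a size check like this only catches padding surprises, not bit-order differences, which is why unit tests were needed as well.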

Since you (the OP) mentioned C++, I suggest using the std::bitset class:

#include <bitset>
#include <iostream>

int main()
{
    uint16 bar = 0x01;

    std::bitset<16> set(bar);

    example x;
    x.bar = set[0];
    x.foo = set[1];

    std::cout << x.bar << std::endl; // display 1
    std::cout << x.foo << std::endl; // display 0
}
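
Going the other direction (not shown in the original answer; a sketch of mine, again assuming the struct example definition from the first snippet) can be done by packing the bits back into a bitset and converting to an integer:

#include <bitset>
#include <iostream>

int main()
{
    example x;
    x.bar = 1;
    x.foo = 0;

    std::bitset<16> set;
    set[0] = x.bar;   // bit 0 <- bar
    set[1] = x.foo;   // bit 1 <- foo

    std::cout << set.to_ulong() << std::endl; // displays 1
}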
Simone
    Is that portable? http://stackoverflow.com/questions/1490092/c-c-force-bit-field-order-and-alignment – Tony Lee Feb 02 '11 at 15:39
  • The actual bit position of *bar* and *foo* are implementation dependent. *bar* may be the most significant bit or the least significant bit, or in the case of creating bitfields using type *int* with a Cosmic compiler, you may find that bytes are swapped. – oosterwal Feb 02 '11 at 15:39
  • the reverse will not work... that is, if you do `f.x = _example;` and then `cout << f.value`, it would not work. – Nawaz Feb 02 '11 at 15:43
  • Is there a guarantee that `struct example` will actually be 16 bits, or should you have another `uint16 :0;` at the end to pad it to the proper size? Also, `uint16` should _probably_ be `uint16_t`. Don't perpetuate the poor code that the OP posts. People post code here for a reason - it's _broken_. – Chris Lutz Feb 02 '11 at 15:45
  • 1
    -1. This is entirely unportable, and not even ISO C/C++, as bit fields are required to be of int type, not short. It is unportable because the bit order of the bit field is not specified by the standard, nor is the bit alignment. And beyond that, your example would also depend on big/little endian. As you can tell, bit fields are *bad* in numerous ways. – Lundin Feb 02 '11 at 15:46
  • I feel the whole "unportable" issue is a little overestimated. If I know that I have to work with a certain compiler and a certain OS, I can't see anything wrong as long as I make sure that the preconditions are met. That said, of course you are free to downvote as long as you wish. – Simone Feb 03 '11 at 07:17
  • @Simone: Unless you intend to re-use code between projects and not re-invent the wheel each time. I have yet to encounter a professional programmer who doesn't re-use their old code later on. – Lundin Feb 03 '11 at 07:31
  • @Lundin I successfully reused that code, so I guess I'm a professional programmer. Cool :-) – Simone Feb 03 '11 at 07:52

try it with:

struct example x;
x.bar = value & 1;
x.foo = value & 1 << 1;

fooN = value & 1 << N;
Lars
    +1, though I'd wrap `1 << 1` etc. in parens just to make the precedence clear. – j_random_hacker Feb 02 '11 at 15:39
  • This code is entirely unportable as you don't know where "bar" and "foo" resides in the bit field. – Lundin Feb 02 '11 at 15:58
  • @Lundin: why would you need to know where they are? The principle of the bitfield is to name the bits so you can address them independently; it's up to the compiler to decide where to tuck them... isn't it? – Matthieu M. Feb 02 '11 at 16:05
    @Lars, @Jonathan Leffler: What rubbish! x.foo will always be zero here, because `value & 1 << 1` can never be equal to 1 (whatever the precedence rules) -- it can only equal 0 or 2. It's answers like this that make me wish I could downvote more than once. It should be: `x.foo = value >> 1 ;` – TonyK Feb 02 '11 at 19:50
  • @Matthieu: It matters as soon as you try to do any form of bit manipulation on them. Come to think of it, it doesn't matter in this code, because the only value you can assign to a 1-bit member of a bit field is 1 or 0. So the posted code is harmless, as it does nothing. – Lundin Feb 02 '11 at 20:41
  • @Lundin: thanks, I was afraid I had missed something. I would say that the point of using bitfields is precisely that they relieve you from the tedium of bit manipulation. So from your comment I deduce that either you do it all yourself, or you leave it all to the compiler, but there is no half-way stance. – Matthieu M. Feb 03 '11 at 07:19
  • @Matthieu: The only reliable use of bit fields is as a chunk of boolean values that you don't care how or where they are stored. If you attempt to use bit fields to hold integer values or use them for any form of memory mapping (register definitions for example), then the result is undefined by the standard and the compiler is pretty free to conjure any random number as result of an operation, see my post below for an explanation why. – Lundin Feb 03 '11 at 07:42
  • @TonyK is right. For some reason I imagined there would be an implicit conversion to `bool` before assigning to a 1-bit bit field (which would convert any nonzero value to 1) but there isn't. Should use `static_cast<bool>(value & 1 << 1)` or `value >> 1` on the RHS instead. Interestingly, 4.7/{2,3} in the standard indicates that if the bit field type was not marked `unsigned` then we would need to mask manually -- i.e. write `value >> 1 & 1`. – j_random_hacker Feb 03 '11 at 13:27

Values assigned to unsigned bitfields are defined to be reduced modulo 2**N, where N is the number of bits in the bitfield. This means that if you assign an unsigned number to an unsigned bitfield, the least significant N bits of the original number will be used.

So, given struct example x, if you want the least significant bit of value to be placed in x.bar you can simply do:

x.bar = value;

If you want the second-least-significant bit of value to be placed in x.foo, you can likewise use:

x.foo = value >> 1;
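
A complete, compilable version of this approach (my sketch, not part of the original answer; where the compiler places bar and foo inside the struct is still implementation-defined, but the assignments themselves are well defined):

#include <stdio.h>
#include <stdint.h>

struct example
{
    uint16_t bar:1;
    uint16_t foo:1;
};

int main(void)
{
    uint16_t value = 0x0002;     /* bit 0 clear, bit 1 set */
    struct example x;

    x.bar = value;       /* reduced modulo 2: keeps only bit 0, so 0 */
    x.foo = value >> 1;  /* keeps only the next bit, so 1 */

    printf("bar=%u foo=%u\n", (unsigned)x.bar, (unsigned)x.foo);
    return 0;
}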
caf