4

What is the purpose of signed char if both char and signed char range from -127 to 127? Where would we use signed char instead of just char?

Caffeinatedwolf
  • 1,067
  • 3
  • 18
  • 24

4 Answers

19
  • unsigned char is unsigned.

  • signed char is signed.

  • char may be unsigned or signed depending on your platform.

Use signed char when you definitely want signedness.
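
For illustration, a minimal sketch (the variable names are arbitrary; what gets printed for `c` depends on your implementation's choice for plain char):

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    signed char   s = -1;   /* guaranteed able to hold negative values */
    unsigned char u = 255;  /* guaranteed non-negative */
    char          c = -1;   /* implementation-defined: behaves like one of the two */

    printf("plain char is %s here\n", CHAR_MIN < 0 ? "signed" : "unsigned");
    printf("s = %d, u = %d, c = %d\n", s, u, (int)c);  /* c prints -1 or 255 */
    return 0;
}
```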

Possibly related: What does it mean for a char to be signed?

Community
  • 1
  • 1
Lightness Races in Orbit
  • 369,052
  • 73
  • 620
  • 1,021
  • +1 ... depending at least on your platform, compiler, and compiler options. – pmg Aug 09 '11 at 14:17
  • Thinking a bit more about it ... the platform itself has no relation to how plain char is interpreted: it is only a function of the compiler and its options. – pmg Aug 09 '11 at 14:25
  • @pmg: I suppose by platform I _mean_ "development environment"; without going into details, I think it'll do as a catch-all. – Lightness Races in Orbit Aug 09 '11 at 14:31
  • Strictly speaking, you should be able to just say "implementation". But it's just as confusing when people realise that `gcc` and `gcc -funsigned-char` should perhaps then be considered different implementations. It's a function specifically of the platform in the sense that it *could* affect the ABI. Although it normally doesn't, since with 2's complement you can treat corresponding signed and unsigned types the same, so the platform frees up the compiler to do whatever. So I'd say it definitely depends on all the things it depends on :-) – Steve Jessop Aug 09 '11 at 15:38
  • Extremely right. On ARM platforms char is by default unsigned. – Fred Aug 09 '11 at 15:42
  • That said, you'd get some funny old behavior if you linked a library compiled so that `(c1 < c2)` holds against a library compiled differently so that `(c2 < c1)` holds -- even if the ABI means they can pass `char` values between them with the same result as an explicit conversion between signed and unsigned, you could easily break the documented contract of the function you're calling as a result of that conversion. – Steve Jessop Aug 09 '11 at 15:45
  • @Steve: That's why using `char` as part of a "public" API is a silly idea. – Lightness Races in Orbit Aug 09 '11 at 19:27
  • @Tomalak: But you don't have much choice if you want to pass strings. Passing a `char*` still means you need both sides to agree what the representation of `char` is. I assume that part of the reason why `strcmp` is defined to interpret the values as `unsigned char` is that you don't want international characters to collate before punctuation, even if `char` is signed. So that suggests a way to design APIs around this inconsistency, if needed. – Steve Jessop Aug 10 '11 at 08:51
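
A small sketch of that collation point, assuming a Latin-1 byte such as `0xE9` ('é') appears in the data; `strcmp` is specified to compare elements as `unsigned char`:

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char a[] = "\xe9";  /* 0xE9: 'é' in Latin-1, negative if char is signed */
    const char b[] = "!";     /* 0x21 */

    /* Direct comparison of plain char values depends on char's signedness. */
    printf("a[0] < b[0]      : %d\n", a[0] < b[0]);

    /* strcmp compares as unsigned char, so 0xE9 (233) sorts after '!' (33)
       regardless of how plain char is defined. */
    printf("strcmp(a, b) > 0 : %d\n", strcmp(a, b) > 0);
    return 0;
}
```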
7

It is implementation defined whether plain char uses the same representation as signed char or unsigned char. signed char was introduced because plain char was underspecified. There's also the message you send to your readers:

  • plain char: character data
  • signed char: small integers
  • unsigned char: raw memory

(unsigned char may also be used if you're doing a lot of bitwise operations. In practice, that tends to overlap with the raw memory use.)
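
A minimal sketch of those three conventions (the names and values here are only illustrative):

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    char          name[] = "hello";      /* plain char: character data */
    signed char   delta  = -12;          /* signed char: a small signed integer */
    unsigned char bytes[sizeof(double)]; /* unsigned char: raw memory */

    double pi = 3.14159;
    memcpy(bytes, &pi, sizeof pi);       /* inspect the object representation */

    printf("%s  delta=%d  first byte=0x%02x\n", name, delta, (unsigned)bytes[0]);
    return 0;
}
```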

James Kanze
  • 146,674
  • 16
  • 175
  • 326
2

First, some background for your question.

A char is one of two kinds:

unsigned char;

signed char;

(both are integral data types)

As typically explained in textbooks:

char: 1 byte, -128 to 127 (i.e. signed by default on many implementations)

signed char: 1 byte, -128 to 127

unsigned char: 1 byte, 0 to 255


One more thing: 1 byte = 8 bits (bit 0 through bit 7).

In a signed type, the most significant bit (bit 7) is the sign bit: 1 means negative, 0 means non-negative (two's complement).

-37 is represented as 1101 1011 (the most significant bit is 1),

+37 is represented as 0010 0101 (the most significant bit is 0).


Similarly, when plain char is signed, its most significant bit is treated as a sign bit.

Why does this usually not matter for characters? Because ASCII character codes (e.g. 'A' = 65) need only 7 bits, so ordinary character data fits either way.

When you need that eighth bit for magnitude instead of sign, doubling the positive range, you use unsigned char (or unsigned int for int).
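
To see the sign bit in action, a minimal sketch (assuming the usual 8-bit, two's-complement representation described above):

```c
#include <stdio.h>

int main(void)
{
    signed char   s = -37;              /* stored as 1101 1011 */
    unsigned char u = (unsigned char)s; /* same bit pattern, reinterpreted */

    printf("as signed char:   %d\n", s); /* -37 */
    printf("as unsigned char: %d\n", u); /* 219, i.e. 256 - 37 */
    return 0;
}
```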

Thanks for the question.

1

Note that on many systems, char is signed char.

As for your question: you would use it when you need a small signed number.
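
For example, a minimal sketch using `signed char` as a compact signed integer (the values are arbitrary):

```c
#include <stdio.h>

int main(void)
{
    /* One byte per element, guaranteed to hold at least -127..127. */
    signed char offsets[] = { -5, 3, -1, 7 };
    int sum = 0;

    for (size_t i = 0; i < sizeof offsets / sizeof offsets[0]; ++i)
        sum += offsets[i];

    printf("sum = %d\n", sum); /* 4 */
    return 0;
}
```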

Šimon Tóth
  • 34,358
  • 19
  • 97
  • 147
  • 1
    `char` is _never_ `signed char`. They are two **distinct** types. However, on many systems, `char` has a signed underlying representation, making them practically interchangeable. – Lightness Races in Orbit Aug 09 '11 at 13:58
  • 2
    Note that the practical consideration that comes out of Tomalak's comment is that C has **no implicit conversion** between `char *` and `signed char *`. You must always cast, either explicitly, or to `void *` and let the compiler do the second conversion. – R.. GitHub STOP HELPING ICE Aug 09 '11 at 14:03
  • Also note that most compilers fail to enforce this and issue at most a warning. This is wrong. – R.. GitHub STOP HELPING ICE Aug 09 '11 at 14:03
  • 1
    @R..: issuing a warning is all that's needed to conform with the standard, presumably you mean morally wrong? – Steve Jessop Aug 09 '11 at 14:07
  • I guess I messed that up a little bit. Technically what's wrong is that they're issuing "at most" a warning. It should be "at least", i.e. the warning should always be issued without any `-W` options, and optionally (preferably) an error. *Morally*, of course, warnings should be for dubious usage that probably indicates a programming error, and things like constraint violations that are just plain invalid C without the need to analyze program flow should be errors. :-) – R.. GitHub STOP HELPING ICE Aug 09 '11 at 17:02
  • Sir, if I do `unsigned char a = 12` vs `signed char a = 12`, what is the difference between the bit patterns stored? – Suraj Jain Dec 28 '16 at 08:32