In C, -1 has a two's complement representation of all ones ("111...1") for whatever size integer you're talking about. If you cast it to (char) like Pete did, you end up with the bit pattern 11111111 on an 8-bit char, which is 0xFF in hex, and reads as 255 in decimal when interpreted as unsigned.
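A minimal sketch of that cast, assuming the usual 8-bit char; reinterpreting the byte as unsigned char shows the 0xFF / 255 pattern:

    #include <stdio.h>

    int main(void) {
        char c = (char)-1;                    /* all bits set: 0xFF on an 8-bit char */
        printf("%u\n", (unsigned char)c);     /* prints 255: the pattern read as unsigned */
        printf("0x%X\n", (unsigned char)c);   /* prints 0xFF */
        return 0;
    }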
I think you're confused... in C the char type is always defined as one byte (sizeof(char) == 1, though a byte need not be 8 bits). If you want to represent a Unicode character you'd use a different type, such as wchar_t. (char)-1 == (char)255 holds on any two's complement implementation, though whether plain char is signed or unsigned is implementation-defined.
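That signedness caveat matters if you compare against the plain int 255 rather than (char)255; a small check, assuming an 8-bit char:

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        char c = (char)-1;
        /* If plain char is signed (CHAR_MIN < 0), c promotes to -1 and this prints 0;
           if char is unsigned, c promotes to 255 and it prints 1. */
        printf("CHAR_MIN = %d\n", CHAR_MIN);
        printf("(char)-1 == 255: %d\n", c == 255);
        /* Both sides cast to char: true on two's complement machines. */
        printf("(char)-1 == (char)255: %d\n", (char)-1 == (char)255);
        return 0;
    }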