Note: this code assumes that there are only two possible
representations. That's a good approximation to reality, but it's not
the exact truth. If 'int' is a four-byte type (which it is on many
compilers), there are 24 theoretically possible byte orders (the 4!
permutations of four bytes). Six of those would be identified as
little-endian by this code, 5 of them incorrectly; the other 18 would be
identified as big-endian, 17 of them incorrectly.
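This discussion refers to code along the lines of the following sketch,
reconstructed from the fragments quoted later in the thread (so the
details may differ from the original):

    #include <stdio.h>

    int main(void)
    {
        int num = 1;
        if (*(char *)&num == 1)   /* does the first byte hold the LSB? */
            puts("little-endian");
        else
            puts("big-endian");
        return 0;
    }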
This would all be pure pedantry, if it weren't for one thing: of those
24 possible byte orders, something like 8 to 11 of them (I can't
remember the exact number) are in actual use on real world machines.
Even that would be relatively unimportant if big-endian and little-endian
were overwhelmingly the most popular choices, but that's not even the
case: the byte orders 2134 and 3412 have both been used in some fairly
common machines.
The really pedantic issue is that the standard doesn't even guarantee
that 'char' and 'int' number the bits in the same order. A conforming
implementation of C could use the same bit that is used by an 'int'
object to store a value of '1' as the sign bit when the byte containing
that bit is interpreted as a char.
No, because you cannot dereference a pointer to void.
The key differences between char* and void* are that
a) you cannot dereference or perform pointer arithmetic on void*
b) there are implicit conversions between void* and any other pointer to
object type.
The general rule is that you should use void* whenever the implicit
conversions are sufficiently important. The standard library's mem*()
functions are a good example where void* is appropriate, because they
are frequently used on pointers to types other than char. You should use
char* whenever you're actually accessing the object as an array of
characters, which requires pointer arithmetic and dereferencing. You
should use unsigned char* when accessing the object as an array of
uninterpreted bytes.
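As an illustration of that rule, here's a minimal sketch of a
memset-style function (my_memset is a hypothetical name): void* in the
interface, so callers get the implicit conversions, and unsigned char*
internally, where the object is actually accessed as bytes:

    #include <stddef.h>

    void *my_memset(void *dest, int value, size_t count)
    {
        unsigned char *p = dest;          /* implicit void* -> unsigned char* */
        while (count-- > 0)
            *p++ = (unsigned char)value;  /* byte access needs a real object type */
        return dest;
    }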
There's no such thing as a typecast in C. There is a type conversion,
which can occur either implicitly or explicitly. Explicit conversions
occur as a result of cast expressions.
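For example:

    double d = 3;      /* implicit conversion: the int 3 becomes 3.0 */
    int i = (int)3.7;  /* explicit conversion via a cast expression; i == 3 */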
The (char*) cast does not convert an integer into a char. It converts a
pointer to an int into a pointer to a char. The char object it points at
is the first byte of 'num'. The * operator interprets that byte as a char.
The result of the cast expression is a pointer to char; it can be
converted into a char and stored into a char variable, but the result of
that conversion is probably meaningless unless sizeof(intptr_t) == 1,
which is pretty unlikely. It would NOT, in general, have anything to do
with the value stored in the first byte of "num".
You could write:
char c = *(char*)#
The only type conversions that are reasonably safe in portable code are
the ones which occur implicitly, without the use of a cast, and even
those have dangers. Any use of a cast should be treated as a danger
sign. The pattern *(T*), where T is an arbitrary type, is called type
punning. In general, this is one of the most dangerous uses of a cast.
In the case where T is "char", it happens to be relatively safe.
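To make that difference concrete, here's a sketch (the choice of int and
float is mine, purely for illustration):

    #include <stdio.h>

    int main(void)
    {
        int i = 42;
        char c = *(char *)&i;    /* relatively safe: character types may
                                    alias any object */
        /* float f = *(float *)&i;   undefined behavior: a float lvalue is
                                     not allowed to alias an int object */
        printf("%d\n", c);
        return 0;
    }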
The best answer to your question is to read section 6.3 of the standard.
However, it may be hard for someone unfamiliar with standardese to
translate what section 6.3 says into "safe" or "unsafe", "portable" or
"unportable". Here's my quick attempt at a translation:
* Any value may be converted to void; there's nothing that you can do
with the result. The only use for such a cast would be to shut up the
diagnostics that some compilers generate when you fail to do anything
with the value returned by a function. However, it is perfectly safe.
* Converting any numeric value to a type that is capable of storing that
value is safe. If the value is currently of a type which has a range
which is guaranteed to be a subset of the range of the target type,
safety is automatic - for instance, when converting "signed char" to
"int". Otherwise, it's up to your program to make sure that the value is
within the valid range.
* Converting a value that is outside the valid range of a signed or
floating point type to that type is not safe.
* Converting a numeric value to an unsigned type that is outside the
valid range is safe, in the sense that your program will continue
running; but the resulting value will be different from the original by
a multiple of the number that is one more than the maximum value which
can be stored in that type. If that change in value is desired and
expected (D&E), that's a good thing, otherwise it's bad.
* Converting a floating point value to an integer type will lose the
fractional part of that value. If this is D&E, good, otherwise, bad.
* Converting a floating point value to a type with lower precision will
generally lose precision. If this is acceptable and expected, good -
otherwise, bad.
* Converting a _Complex value to a real type will cause the imaginary
part of the value to be discarded. Converting it to an _Imaginary type
will cause the real part of the value to be discarded. Converting
between real and _Imaginary types will always result in a value of 0. In
each of these cases, if the change in value is D&E, good - otherwise, bad.
* Converting a null pointer constant to a pointer type results in a null
pointer of that type. Converting a null pointer to a different pointer
type results in a null pointer of that target type. Both conversions are
safe.
* Converting a pointer to an integer type is safe, but unless the target
type is either an intptr_t or a uintptr_t, the result is
implementation-defined, rendering it pretty much useless, at least in
portable code. If the target type is intptr_t or uintptr_t, the result
may be safely converted back to the original pointer type, and the
result of that conversion will compare equal to the original pointer.
You can safely treat that integer value just like any other integer
value, but conversion back to the original pointer type is the only
meaningful thing that can be done with it.
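Here's a minimal sketch illustrating several of the cases above; it
assumes an implementation that provides the optional uintptr_t type, and
8-bit chars for the wraparound arithmetic:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Conversion to void: the only point is discarding the result. */
        (void)printf("section 6.3 conversions\n");

        /* Out-of-range conversion to an unsigned type: well-defined, but
           the value changes by a multiple of (maximum value + 1). */
        unsigned char uc = 260;       /* 260 - 256 == 4, given 8-bit chars */
        printf("%u\n", (unsigned)uc);

        /* Floating point to integer: the fractional part is lost. */
        int truncated = (int)3.9;     /* 3 */
        printf("%d\n", truncated);

        /* Pointer -> uintptr_t -> pointer: the intermediate integer is
           opaque, but the recovered pointer compares equal. */
        int obj = 42;
        uintptr_t rep = (uintptr_t)&obj;
        int *back = (int *)rep;
        printf("%d\n", back == &obj); /* prints 1 */
        return 0;
    }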