From jacob_navia's reply to Seebs (sorry, I lost the exact attribution):
\subsection{What is a type?}
A first tentative definition of what a type is could be “a type is a
definition of an algorithm for understanding a sequence of storage
bits”. It gives the meaning of the data stored in memory. If we say
that the object a is an int, it means that the bits stored at that
location are to be understood as a natural number that is built by
consecutive additions of powers of two as specified by its bits.
First, a nit: 'definition of an algorithm' is redundant; 'algorithm'
is sufficient. For a tutorial reader, I think it would be enough to
just say 'rule' or 'set of rules'. (To a formalist or mathematician,
'algorithm' must be effectively computable whereas 'rules' might not be,
but the rules here are so simple this distinction is uninteresting.)
Second, and more significant: to someone who knows what 'natural
number' means, this is very misleading. The naturals exclude negative
numbers, while C's int includes them. To *some* mathematicians, the
naturals also exclude zero; if zero is included, the naturals are a
reasonable mathematical analog to C's _unsigned_ integer types. And
unsigned-int might be a better
tutorial example, because you don't have to explain the asymmetric
behavior of two's-complement, much less the different behavior of
ones'-complement and sign&magnitude.
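To make that concrete, here's a minimal sketch in standard C (nothing
assumed beyond CHAR_BIT from <limits.h>) that reads an unsigned int's
stored bits exactly the way the type's rule says to: one power of two
per set bit, in any order:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int x = 37u;                 /* bits ...100101 */
        const char *sep = " ";

        /* The type 'unsigned int' says: sum a power of two for each
           set bit.  37 = 2^5 + 2^2 + 2^0 = 32 + 4 + 1. */
        printf("%u =", x);
        for (int bit = (int)(sizeof x * CHAR_BIT) - 1; bit >= 0; bit--) {
            if (x & (1u << bit)) {
                printf("%s2^%d", sep, bit);
                sep = " + ";
            }
        }
        putchar('\n');
        return 0;
    }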
Third, a nit: the additions here are mathematical ones and behave
properly, so order doesn't matter and 'consecutive' adds nothing.
If we say that the type of a is a double, it means that the bits are to
be understood according to (for instance) the IEEE 754 standard: a
sequence of bits representing a double-precision floating-point value.
Yes. And some-integer versus some-floating is the easiest example to
explain for C. In some other languages, characters are a completely
different type than integers, and in some so are enumerations. But in
C these are just variants or as you say 'refinements' of integers, so
using them to explain types would be more difficult and less clear.
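A short sketch of that contrast: the same bytes of storage, read under
two different types. (This assumes, as on virtually every current
hosted platform, that double and uint64_t are both 8 bytes and that
doubles are IEEE 754; the hex value in the comment holds only under
IEEE 754.)

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        double d = 1.0;
        uint64_t bits;

        /* Copy the object's bytes, unchanged, into an integer object;
           only the type's reading of them differs.  (Assumes
           sizeof(double) == sizeof(uint64_t).) */
        memcpy(&bits, &d, sizeof bits);

        printf("as double:   %f\n", d);     /* 1.000000 */
        printf("as uint64_t: %#llx\n",      /* 0x3ff0000000000000 with IEEE 754 */
               (unsigned long long)bits);
        return 0;
    }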
In addition to defining the values of representations (and the
representations of values), types also define the _operations_ on
those values. In particular, addition and multiplication of unsigned
int are (and must be) quite different from those for double. Addition
of pointer and integer is somewhat different, but less dramatically.
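A sketch of all three, in standard C; the only assumption is a 32-bit
unsigned int for the exact values printed:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* Three operations, all spelled '+', each chosen by the
           operand types. */
        unsigned int u = UINT_MAX;
        double d = (double)UINT_MAX;    /* exact for a 32-bit unsigned int */
        int arr[4] = { 10, 20, 30, 40 };
        int *p = arr;

        printf("%u\n", u + 1u);         /* unsigned +: wraps mod 2^N; prints 0 */
        printf("%.0f\n", d + 1.0);      /* double +: FP add; prints 4294967296 */
        printf("%d\n", *(p + 2));       /* pointer +: scaled by sizeof(int); 30 */
        return 0;
    }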
A second, more refined definition would encompass the first but add the
notion of the "concept" behind a type. For instance, on some machines
the type "size_t" has exactly the same bits as an unsigned long; yet
it is a different type. The difference is that we store sizes in
size_t objects, not arbitrary integers. The type is associated
with the concept of size. We use types to convey a concept to the
reader of the program.
The sizes of objects, or in simple terms of memory areas. And often
also offsets or subscripts within those objects, although you may not
want to go into that here. But not the sizes of, for example, a box
to be shipped by post, or a floor to be covered with tiles.
I sometimes talk about this as 'purpose' rather than 'concept'.
I think that's slightly more specific, although not much.
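Either way, the point shows up directly in declarations. A small
illustration (the function names here are made up, but the types are
standard C): the parameter and loop-index types say "this is the size
of an object, and offsets within it", which the identical bits in an
unsigned long would not:

    #include <stddef.h>
    #include <stdio.h>

    /* 'len' holds the size of an object, not an arbitrary integer,
       so it is size_t even where size_t and unsigned long have
       identical bits. */
    static void dump(const char *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++)   /* i is an offset in buf */
            printf("%02x ", (unsigned char)buf[i]);
        putchar('\n');
    }

    int main(void)
    {
        const char msg[] = "type";
        dump(msg, sizeof msg);             /* sizeof yields a size_t */
        return 0;
    }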
At the base of C's type hierarchy are the machine types, i.e. the types
that the integrated circuit understands. C has abstracted from the
myriad of machine types some types like 'int' or 'double' that are
almost universally present in all processors.
There are many machine types that C doesn't natively support; for
instance, some processors support BCD-coded data, but that data is
accessible only through special libraries in C.
I'm not sure about 'many', at least in comparison to those supported,
but certainly some. BCD is a good example. Actually it needs either
library(ies) or extension(s) -- I've used one compiler that did a
core-language extension. But either way it's not standard C.
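For readers who haven't met it: packed BCD stores one decimal digit
per nibble, so such a library or extension ultimately does conversions
like the following sketch (the function names are made up for
illustration; standard C has no BCD type):

    #include <stdio.h>

    /* Pack a value in 0..99 as one decimal digit per nibble:
       42 becomes 0x42. */
    static unsigned char to_packed_bcd(unsigned value)
    {
        return (unsigned char)(((value / 10u) << 4) | (value % 10u));
    }

    static unsigned from_packed_bcd(unsigned char bcd)
    {
        return ((bcd >> 4) * 10u) + (bcd & 0x0fu);
    }

    int main(void)
    {
        unsigned char b = to_packed_bcd(42u);
        printf("42 -> %#x -> %u\n", (unsigned)b, from_packed_bcd(b));
        /* prints: 42 -> 0x42 -> 42 */
        return 0;
    }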
<snip rest>