Flash Gordon wrote:
> size_t is also a similar artificial limitation. The fact that arrays
The required properties of size_t force every C implementation to have
some limit, fixed not later than compile time, on the size of objects
and hence of arrays and strings. Given that, using size_t never
imposes any limit stricter than the implementation already does. But
int (and even unsigned int) may be too small to represent all valid
subscripts or offsets of a valid object; I think that's the point of
calling it an _artificial_ limit. In fact one of the (obscure and
rare) systems I use today does exactly that. And in fact I sometimes
write routines that use int or unsigned int lengths/sizes for
efficiency, knowing and accepting that this rules out some otherwise
valid uses of them.
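To make that concrete, here's a minimal sketch (the object size is
hypothetical, chosen to exceed INT_MAX on a common system with 32-bit
int and 64-bit size_t; where that much memory isn't available, malloc
simply fails):

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Hypothetical size: a valid object on many 64-bit
           implementations, but with more elements than int can
           count. */
        size_t n = (size_t)INT_MAX + 2;
        char *p = malloc(n);
        if (p == NULL)
            return EXIT_FAILURE;

        /* Fine: size_t can index any object the implementation
           allows to exist. */
        for (size_t i = 0; i < n; i++)
            p[i] = 1;

        /* Not fine on such a system: 'for (int i = 0; i < n; i++)'
           overflows i (undefined behavior) before reaching
           p[n - 1]. */

        printf("last byte: %d\n", p[n - 1]);
        free(p);
        return 0;
    }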
> can only take certain kinds of scalars as index parameters is also an
> artificial limitation. But it turns out that basically every language
Not in C. Notice that 6.5.2.1 and 6.5.6 talk about 'integer' type, not
specifically 'int'. And similarly for declarations in 6.7.5.2. The
_values_ of these subscripts and bounds can't exceed some limit which
cannot be greater than SIZE_MAX (+1 where applicable), but the type
may be long long long long long int or whatever.
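For illustration, here's a tiny example (standard C, nothing
implementation-specific) showing that the subscript's type is
unconstrained so long as its value is within the object's bounds:

    #include <stdio.h>

    int main(void)
    {
        double a[10] = {0};
        long long i = 3;      /* subscript of type long long: fine */
        unsigned char j = 7;  /* so is unsigned char */

        a[i] = 1.5;           /* only the value must be a valid index */
        a[j] = 2.5;

        printf("%.1f %.1f\n", a[i], a[j]);
        return 0;
    }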
FORTRAN on the other hand does have this problem. Originally it had
only one size of INTEGER and dimensions and subscripts were of that
type. When F90 added multiple 'KINDs' (meaning widths), it specified,
apparently for backward compatibility, that bounds and subscripts are
of the 'default INTEGER KIND', which is also required to occupy the
same storage as the default (single-precision) REAL, and thus in
practice must usually be the machine word, at least on machines that
have a recognizable word size. And there are machines, and
(particularly nowadays) sizes of large number-crunching problems
coded in Fortran that people want to run, that exceed that default
size.
> and every array-like or string-like (with the notable exceptions of Lua
> and Python) has a similar kind of limitation.
I'm not sure if you meant every language with any array-like or
string-like type, or every such type in every language. Although I
can't immediately think of a case where it makes a difference.
I'm pretty sure LISP allows array subscripting by bignums, but still
subject to available (virtual) memory which in reality <G> always has
some limit. I'm not sure about strings. LISPers traditionally didn't
focus much on things that would need long strings.
I don't recall what APL does here but I think it would be worth
checking; that language (or rather its designer, Iverson) thought more
thoroughly about mathematical 'sense' than any other I know.
- David.Thompson1 at worldnet.att.net