Your example appears to be more a shortcoming of the way Java handles
data than of strongly typed variables as such. Strong typing limits what
you can do with a variable: you can't assign an ASCII value to a strongly
typed integer variable, whereas languages that do not enforce strong
typing, such as Fortran IV, will allow it. Pointers in C are not strongly
typed, so you can perform the same kind of operation there; pointers in
Pascal are strongly typed, and such operations are not permitted.
In C, if I have double trouble[1048576][7], then trouble[4][2] has type
double, and there is no problem working arithmetically with that
single value.
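For instance, a minimal compilable sketch of that declaration (the array
is placed at file scope so its ~56 MB lands in static storage rather
than on the stack):

    #include <stdio.h>

    /* same shape as in the text */
    static double trouble[1048576][7];

    int main(void)
    {
        trouble[4][2] = 3.25;                  /* trouble[4][2] is just a double */
        double x = trouble[4][2] * 2.0 + 1.0;  /* ordinary arithmetic on it */
        printf("%f\n", x);
        return 0;
    }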
But in C, trouble[4] also has a type -- it is double [7], which is to
say an array of 7 doubles. Hence, if one were using a strongly-typed C,
one would be able to access the doubles in the array "trouble" at most
7 at a time, unless the language were enhanced with array operators
(such as in IDL). This isn't a matter of wanting to store characters
in the space used by the doubles; it is a matter of the type system.
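That row type is visible from inside C itself; a small sketch reusing
the same declaration:

    #include <stdio.h>

    static double trouble[1048576][7];

    int main(void)
    {
        /* trouble[4] is an object of type double [7]: one row of 7 doubles */
        printf("%zu\n", sizeof trouble[4]);    /* prints 7 * sizeof(double) */

        double (*row)[7] = &trouble[4];        /* a pointer matching that type exactly */
        (*row)[2] = 1.5;
        printf("%f\n", trouble[4][2]);         /* 1.500000 */
        return 0;
    }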
In any given language, there might not -be- a type system, or there
might be a type system that operates only on the primitive types --
but as soon as you can build aggregate types that are considered
distinct from "just a convenient way to organize primitive data
types", then, if you can refer to array sub-sections at all in the
language, those sub-sections have types of their own, and a "strongly
typed" system would enforce those types.
Continuing the example, consider trouble[4][9] -- to a strongly typed
system, that's a type error, as trouble[4] has type double [7] and so
has no element at index 9.
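Present-day C, by contrast, accepts the expression; a hedged
illustration (the out-of-row index is deliberate, and a modern compiler
may warn about it):

    static double trouble[1048576][7];

    double peek(void)
    {
        /* Accepted by C as it stands; under strict row typing it would be
           rejected, since trouble[4] has only 7 elements.  Because the rows
           are laid out contiguously, this reads the same storage as
           trouble[5][2] -- though a strict reading of the standard calls
           the access undefined. */
        return trouble[4][9];
    }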
For the mathematical analysis we were doing, we often needed to
pass a subsection of a large array. In C as it is now, that's
trivial to do efficiently, as we can make use of the synonym
between the address of an array (&trouble), the address of the
first element of its first dimension (&trouble[0]), and the
address of the first element of the first element of its first
dimension (&trouble[0][0]). C allows all of these to be passed
into a routine that has the corresponding parameter declared
as any of double *, double [], or double [][7]. C's type laxity
allows us to access what we know to be a block of consecutive
memory in any way that is convenient to us -- but in a strongly
typed system, we would be constrained to access the memory only
in the way it was declared.
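A sketch of that flexibility (sum() here is just an illustrative
routine, not anything from the original analysis code):

    #include <stdio.h>

    static double trouble[1048576][7];

    /* wants "n doubles starting here", however the caller has them organized */
    static double sum(const double *p, size_t n)
    {
        double s = 0.0;
        for (size_t i = 0; i < n; i++)
            s += p[i];
        return s;
    }

    int main(void)
    {
        double whole = sum(&trouble[0][0], 1048576u * 7u);  /* the entire array */
        double row4  = sum(trouble[4], 7);                  /* a single row */
        double rows  = sum(&trouble[4][0], 3 * 7);          /* rows 4..6 as one run */
        printf("%f %f %f\n", whole, row4, rows);
        return 0;
    }

Reading across row boundaries, as in the last call, leans on exactly
the laxity described above; a pedantic reading of the standard frowns
on it, but the memory is one consecutive block and it works.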
There are languages in which multidimensional arrays have unspecified
internal structure -- allowing the implementation to move transparently
between straight blocks of storage, vectors of pointers to vectors of
storage, or various sparse representation techniques (Mathematica, for
example). For those languages, one must enforce strong typing of array
subsections (or deny the possibility of them), or the language must
provide a mandatory accessor function along with the data pointer, or
it must provide a way for the current storage arrangement to be
examined.
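As a rough sketch of what the accessor and representation-query options
might look like, expressed in C terms (the names and layout here are
invented for illustration, not taken from any particular language):

    #include <stddef.h>

    enum repr { REPR_DENSE, REPR_SPARSE };

    typedef struct {
        enum repr repr;
        size_t rows, cols;
        double *dense;        /* valid only when repr == REPR_DENSE */
        /* ... fields for a sparse representation would go here ... */
    } array2d;

    /* mandatory accessor: works no matter how the data is currently stored */
    double array2d_get(const array2d *a, size_t i, size_t j)
    {
        if (a->repr == REPR_DENSE)
            return a->dense[i * a->cols + j];
        return 0.0;           /* placeholder for the sparse lookup */
    }

    /* representation query: the flat block if one exists, else NULL */
    const double *array2d_block(const array2d *a)
    {
        return a->repr == REPR_DENSE ? a->dense : NULL;
    }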
If one were hoping for efficiency by memcpy()'ing an array area larger
than the fastest-varying dimension, and one cannot force a particular
representation, then only the last of those three possibilities (the
ability to examine the internal representation) offers any hope of
that at all: one cannot get memory-block efficiencies without
escaping from strong typing.
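For completeness, a sketch of the memory-block shortcut itself, valid
only when the representation is known to be one contiguous block (as C
guarantees for the declaration above; the helper name is illustrative):

    #include <string.h>

    static double trouble[1048576][7];
    static double scratch[3][7];

    void grab_rows_4_to_6(void)
    {
        /* one memcpy for three whole rows, rather than row-by-row copies --
           the efficiency in question, and precisely what strict per-row
           typing would rule out */
        memcpy(scratch, &trouble[4][0], sizeof scratch);
    }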