The crash happens when the JNI library executes a memcpy call.
I think the code should be OK because it has already worked for a long time.
For starters, top-posting is considered bad form on Usenet.
Anyways, "it's always worked before" is not exactly a sign that the code
in question actually works [1]. Although, given my experience with
debugging C code, a crash here is generally a sign of a more pernicious
latent problem that only now happened to manifest itself.
yep...
there are many evils which can lurk in a piece of code only to manifest
later. something can also work flawlessly in one place and fail
miserably in another.
hence, one needs to test their code on any relevant target, but OTOH,
one can set limits to how and where their code will work (say: this only
works on 32-bit x86, or this will work on 32- or 64-bit x86, or similar...).
What do you mean by "the applet is 32-bit"? Java bytecode is completely
independent of architecture (indeed, that is its point), so the only
things that could be 32-bit or 64-bit are the JVM that you are running
and the native library code being called by said applet.
well, in some sense, the bytecode is always 32 bit, given certain
properties:
long and double require 2 spots in the constant pool, locals frame, on
the stack, and in argument lists;
dup_x (dup_x1, dup_x2, dup2_x1, dup2_x2) will exhibit a lot of
funkiness, essentially treating the long/double entries as multiple entries;
....
(and in another sense it would seem better suited for a MIPS or SPARC
based interpreter than an x86 based one).
granted, it works just the same on 64 bits, either by naively using 128
bits to store them in this case (possibility A), or by glossing over the
issue in the JIT (say, the extra spot becomes 'void' and is not assigned
any physical storage).
OTOH, I had previously written a translator which tried to coerce these types
into only a single conceptual stack/args entry, but this made the dup_x
instructions very awkward...
granted, trying to make long/double be single entries on a 32-bit
interpreter would likely be more awkward, because it would either mean
naively using larger spots for all the other entries, using indirect
storage (larger types are internally passed by reference), or
type-boxing (expensive).
again, in this case, a JIT probably wouldn't care much.
[1] Random digression. It's slightly annoying when you are trying to
point out why not to implement something in a certain way, and the
example you come up with on the fly would seem to give the same result.
Even more annoying is when the explanation as to why it happens in this
particular case is a ways beyond the scope of the class.