It can be bad because you might end up on a machine where 32-bit types
are intrinsically slower than 64-bit types, so by mandating that ONLY
32 bits be used, you're slowing the program down.
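As an aside, and only as a rough sketch of my own: C99's <stdint.h> already anticipates that trade-off by providing int_fast32_t, a type meant to be the fastest available with at least 32 bits, alongside the exact-width int32_t. Whether the two actually differ in width on a given machine is the implementation's call.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t      exact = 0;  /* exactly 32 bits, even if that is the slow width */
        int_fast32_t fast  = 0;  /* at least 32 bits, in whatever width is fastest  */

        printf("int32_t:      %zu bytes\n", sizeof exact);
        printf("int_fast32_t: %zu bytes\n", sizeof fast);
        return 0;
    }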
I suppose it's possible, but that seems a bit like putting the cart
before the horse. If I say "do not blaspheme" because it might
piss off God, I am presupposing the existence of God, which could very
well be false. You are presupposing that a hardware manufacturer is
going to build a CPU with those characteristics. How likely is that?
See above. Imagine that you have something that you know only needs to
be 16 bits, so you specify i16 for it. You have an array of them. You're
on a machine where access on anything other than 64-bit boundaries adds
noticeable latency. Poof!
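Sketching that contrast in C99 terms (my illustration, not anything from the thread):

    #include <stdint.h>

    /* Exact-width: compact and well-defined, but on the hypothetical
       machine above every element access may straddle the preferred
       boundary and pay the latency penalty.                           */
    int16_t      samples_exact[1024];

    /* Fast-width: the implementation is free to widen the elements to
       whatever it can load and store without that penalty.            */
    int_fast16_t samples_fast[1024];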
Well now. I think I said (or implied) that I was attempting to avoid
constructs that break when assumptions about type sizes are wrong.
You seem to be saying that program correctness is subordinate to
performance in this instance. I beg to differ.
This kind of thing really does happen. There's a reason C gives minimum
ranges for the basic types.
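For reference, a quick sketch of what those guarantees look like: the standard fixes only the minimums (int covers at least -32767..32767, long at least -2147483647..2147483647), and the printed values are whatever the local implementation actually provides.

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* Only the minimum ranges are guaranteed; actual widths vary
           from one implementation to the next.                        */
        printf("int:  %d .. %d\n",   INT_MIN,  INT_MAX);
        printf("long: %ld .. %ld\n", LONG_MIN, LONG_MAX);
        return 0;
    }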
I suppose it does, but on this particular topic I am not going to
assume something for which there is no supporting evidence.
Machine-specific code which does something that's stupid on a particular
machine is not faster than portable code which lets the compiler pick
something smart.
In general, the vast majority of code does not gain any performance from
trying to be more machine-specific, and in the cases where it hurts, it
hurts BADLY.
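One concrete shape of that, sketched by me rather than taken from the
thread: reading a 64-bit value out of a byte buffer. The "clever" cast
assumes cheap, legal unaligned loads; the portable memcpy version lets
the compiler decide, and mainstream compilers still reduce it to a
single load where the target allows it.

    #include <stdint.h>
    #include <string.h>

    /* Machine-specific shortcut: assumes unaligned 64-bit loads are
       cheap and permitted; on some targets this is slow, on others it
       traps, and it violates the aliasing rules besides.              */
    uint64_t read_u64_cast(const unsigned char *p)
    {
        return *(const uint64_t *)p;
    }

    /* Portable: the compiler knows what memcpy means and emits the
       smart thing for the target, typically a single load.            */
    uint64_t read_u64_portable(const unsigned char *p)
    {
        uint64_t v;
        memcpy(&v, p, sizeof v);
        return v;
    }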
Exploding software hurts too. At any rate, I'm not being
machine-specific; I am attempting to be type-specific within an
arbitrary algorithm across multiple platforms. Those are two
very different approaches.
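By way of illustration only (my sketch, not code from this thread): in
something like a 32-bit FNV-1a hash, the arithmetic is defined modulo
2^32, so uint32_t is part of the algorithm's correctness on every
platform rather than a guess about what the hardware prefers.

    #include <stddef.h>
    #include <stdint.h>

    /* The algorithm is specified in terms of 32-bit wrap-around, so
       the exact width is part of its definition, not an optimization. */
    uint32_t fnv1a_32(const unsigned char *p, size_t n)
    {
        uint32_t h = 2166136261u;        /* FNV-1a offset basis */
        while (n--) {
            h ^= *p++;
            h *= 16777619u;              /* FNV-1a prime        */
        }
        return h;
    }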
Regards,
Uncle Steve