Öö Tiib said: Efficiently and transparently?
char (* a)[50000] = new char[50000][50000];
Alf said, replying to Ian Collins: Depends to some degree on what you mean. First note that there can be only one such array of bytes at any time (in any given process). So it's a special case, used for some special purpose.
If we're talking about a convention of not mixing signed/unsigned, then
it's no big deal: you know about this one special array, and you have no
need to obtain its size in your own code. You just make sure to store an
end-pointer at the time when you allocate the array (at that point using
an unsigned size), and access the array elements via pointers or
pointer-based local-part indexing instead of all-over-the-array numerical
indexing. That's that.
That might sound as if you need to be extra careful with this array
just because of the adoption of a convention of not mixing signed/unsigned.
But you need to be extra careful anyway, since that array *is* a special
case no matter how it's handled.
For given p1 and p2 pointing to bytes of that array, evaluating p2-p1 is
potentially Undefined Behavior (via §5.7/5), since the result is of the
signed ptrdiff_t type, which cannot represent a difference larger than
PTRDIFF_MAX.
That is, that special array is a special handle-with-extreme-care case
no matter how you handle it, including if you use unsigned types for
indexing and size; it can be handled efficiently, but not transparently,
as if it were any other array.
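A minimal sketch of that convention, in the spirit of the above (the 3 GB
size, the names, and the layout are illustrative assumptions, not anyone's
actual code from this thread):

#include <cstddef>
#include <cstdlib>

// Hypothetical globals for the one special >2GB byte array.
unsigned char* big_begin = 0;
unsigned char* big_end = 0;    // one past the last byte, stored at allocation

bool allocate_big(std::size_t n)   // size passed as unsigned, e.g. 3 GB
{
    big_begin = static_cast<unsigned char*>(std::malloc(n));
    if (big_begin == 0) { return false; }
    big_end = big_begin + n;   // end-pointer computed once, with the unsigned size
    return true;
}

void zero_big()
{
    // Traverse by pointer and never form p2 - p1 across the whole array:
    // with a 3 GB array, a distance of 2.5 GB exceeds PTRDIFF_MAX
    // (2147483647 with 32-bit pointers), which is the §5.7/5 hazard above.
    for (unsigned char* p = big_begin; p != big_end; ++p) { *p = 0; }
}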
Summing up, if the signed ptrdiff_t is sufficient for handling some
array without UB, then (master of tautologies demonstrates his mastery)
ptrdiff_t is sufficient for handling that array without UB, and
otherwise it isn't.
Alf, you should consider politics as a change of career!
Was that a yes or a no?
Leigh said: N.B.: said signed pointer arithmetic can only address the range that
ptrdiff_t provides. This is less of an issue on 64-bit platforms,
which typically have much less memory than a 64-bit address might
imply.
Right, so we can argue that having a single array using more than half
the available virtual address space is an anomaly. Attempts to solve
the problem include 1) using unsigned integers for indexing, or 2)
changing the size of the address space.
Here 2) is the proper solution, and 1) is a cure that is just as bad
as the disease.
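To put numbers on 2): on a typical LP64 platform (an assumption about the
target), ptrdiff_t is 64 bits wide, so its range dwarfs any installable
memory. A tiny check, using the C99/C++11 PTRDIFF_MAX macro:

#include <cstdint>
#include <cstdio>

int main()
{
    // On LP64, PTRDIFF_MAX is 2^63 - 1 = 9223372036854775807, roughly
    // 8 exbibytes: far beyond any real machine's RAM, so no single
    // array can outgrow signed indexing there.
    std::printf("PTRDIFF_MAX = %td\n", PTRDIFF_MAX);
    return 0;
}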
Alf said: At the application-code level that concerns only one case, namely a >2GB
array of bytes.
It seldom happens, and can be dealt with if it does happen.
Pete said: Don't get me started. Java was designed for beginners.
That was
especially evident in the early versions of the Java library, which
was designed in ways that beginners would love but experienced
programmers would find limiting.
I firmly believe that that was
because the library was designed by beginners, so it had many
beginners' mistakes. And certainly in some areas that hasn't
improved; take a look at the specification for
java.util.SimpleTimeZone -- it's far from simple, and should have
been split into several subclasses.
Ian said: Try allocating >2GB of memory on a 32-bit system without unsigned
integers.
But it does happen
and support for that feature already exists.
So, why should anyone want to take it away?
What is there to be gained?
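A sketch of that allocation, assuming (and it is an assumption) a 32-bit
platform whose OS actually grants a user process more than 2 GB of address
space:

#include <cstddef>
#include <cstdio>
#include <cstdlib>

int main()
{
    // 3 GB does not fit in a signed 32-bit int or ptrdiff_t
    // (both max out at 2147483647), but it does fit in the
    // unsigned 32-bit size_t that malloc takes.
    std::size_t n = 3221225472u;   // 3 * 1024 * 1024 * 1024 bytes
    void* p = std::malloc(n);      // may still fail: the OS must cooperate
    if (p == 0) {
        std::puts("allocation failed");
        return 1;
    }
    std::puts("got 3 GB");
    std::free(p);
    return 0;
}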
Ian said: But what if you don't, or can't, do 2)? Option 1) should still be
available, and that requires unsigned integers.
The Java designers didn't think there was any need for unsigned
integers. Probably because they handle errors well and don't leave as
much room for undefined behavior?
Ian said: Now that this thread has played itself out, it might be interesting to
consider where the protagonists' views on the subject originated.
I started my programming career as a hardware engineer in a team
where we programmed all our own hardware. The company's software
developers were all "application types" who didn't understand
hardware. In that world of registers and buses, just about
everything is a bag of bits, so using unsigned types is the natural
way of things. I still enjoy embedded programming and writing
drivers, so I guess I'm stuck in the use unsigned types mindset.
I bet most of those who dislike unsigned types are in the "application
types who didn't understand hardware" category.
DaveB said: That there is Java, and that it does not have them, indicates NO. The
question, then, becomes what the tradeoffs are. Maybe in a VM
environment the elimination of unsigned integers is easier to accept?