far pointer


CBFalconer

Joe said:
CBFalconer wrote:
.... snip silly mechanism for finding ptr_diff ...
Sounds like the Safety mechanism still needs work. The Security side is
rock solid. There is no way to find out where the launching missiles are
coming from or where they are going. :)

How can you say that? I am told that this was a Microsoft system,
and they calculated the chance of uncaught error as 1 in 2 ** 128,
with a mean time to failure of 300 millennia.
 

Charles Richmond

CBFalconer said:
... snip silly mechanism for finding ptr_diff ...

How can you say that? I am told that this was a Microsoft system,
and they calculated the chance of uncaught error as 1 in 2 ** 128,
with a mean time to failure of 300 millennia.
If you really meant a Microsoft system, the MTBF is more like 300 milliseconds.

--
+----------------------------------------------------------------+
|  Charles and Francis Richmond    It is moral cowardice to      |
|                                  leave undone what one         |
|  richmond at plano dot net       perceives right to do.        |
|                                           -- Confucius         |
+----------------------------------------------------------------+
 

Jasen Betts

What he said. But, yes, they can be different, and they will ultimately
evaluate to the same address. How? One can be normalized and the other
non-normalized, or you can have two non-normalized pointers with different
segment/offset combinations that resolve to the same absolute address.

Or worse, the same physical memory (or logical memory) could be accessed
via two different addresses... I think you can even do that with near
pointers (and shared memory calls) in Linux.

Luckily, the standard doesn't require that pointers with different binary
representations address different memory.
 

Jasen Betts

Keith said:
I'm reading Malcolm as being wrong.
The restriction that pointers must be calculated within the object
has nothing to do with comparing pointers for equality.

a comparison is a calculation.

Bye.
Jasen
 

Keith Thompson

Jasen Betts said:
a comparison is a calculation.

Bye.
Jasen

I didn't write that; pete did.

Yes, a comparison is a calculation, but that's not relevant. It's
perfectly legal to compare pointers to unrelated objects for equality.
(It computes a 0 or 1, not a pointer value.) Comparing pointers to
distinct objects for "<", "<=", ">", or ">=" invokes undefined
behavior.
 

Jasen Betts

It is if that's how the implementation has to do pointer comparisons
in order to satisfy the requirements of ANSI C. Now, if you can't
GET two pointers that point to the same place but have unequal bit
patterns without invoking undefined behavior, it's not an issue.

Therefore it's not an issue, because you can't get two such pointers
without invoking undefined behavior.
This might happen if pointer math always does the normalization.
It might also happen if pointer math operates on only the offsets,
and objects can be no bigger than 64k.
But if you're trying to support large objects, things get messier.

That's what the huge memory model was for, and it gave you a 32-bit
size_t and automatic normalisation of pointers.

Huge model wasn't the only way to have huge objects; for example, you
can use farmalloc() in one of the other memory models, but as soon as
you do that you're outside the standard.

Bye.
Jasen
 

Gordon Burditt

Therefore it's not an issue, because you can't get two such pointers
without invoking undefined behavior.

That depends entirely on how the code is generated, particularly
how pointer math is done. Occasional (as opposed to always or
never) normalization could get you in that situation. The implementor
needs to make decisions consistent with the requirements of ANSI
C. Things like how pointer math is done, how pointer comparisons
are done, and the maximum size of objects have to work together to
do things right. You can't have fast pointer math, fast pointer
comparisons, and objects as big as you want. You also have to take
into account any shortcuts taken to speed up the code. If you hoist
normalization out of a loop, because it's a known short loop and the
pointer can't possibly overflow, then a pointer passed from inside
the loop to another function means that function CANNOT assume that
"pointers are always normalized" any more, unless the code normalizes
pointers before they are passed to a function.
That's what the huge memory model was for, and it gave you a 32-bit
size_t and automatic normalisation of pointers.

The implementor has to decide whether he wants "large" or "huge" model,
or something in between, and probably wants to be able to generate
faster code where enough is known about the situation to justify it.
Huge model wasn't the only way to have huge objects; for example, you
can use farmalloc() in one of the other memory models, but as soon as
you do that you're outside the standard.

If you are trying to come up with a "nice" implementation, maybe
you don't want the introduction of farmalloc() to break pointer
comparison. So you generate one set of code where the pointers
might have come from farmalloc(), and you generate (probably faster)
code where you know the pointers couldn't have come from farmalloc()
(e.g. taking the address of an auto variable). You might also
take the code for farmalloc() and rename it malloc().

Gordon L. Burditt
 
