Doubts about pointers

amit khan

Hello friends,

I have a couple of doubts about pointers in C.

1) I have a function whose prototype is:

my_function(char far *);

I need to pass it an array of char which I have defined as follows:

char my_array[10];

The question is: is it all right to typecast to a far pointer? E.g.
int i = my_function( (char far *)(char *)my_array );

I think I am still only passing a near pointer, so how do I force
the segment to be passed also?

2) As expected, (char far *)NULL is NULL, but if I set
char near *x = NULL;
then (char far *)x is non-NULL (the segment is non-zero)!

This seems crazy, it means that if you have a prototyped function
with a far pointer argument (maybe NULL) and you have some
near-mode address from somewhere else (maybe NULL),
you have to handle the NULL case specially. What's going on?

I am using Borland Turbo-C 2.01 if it's relevant.
 
Ian Collins

Hello friends,

I have a couple of doubts about pointers in C.

1) I have a function whose prototype is:

my_function(char far *);

I need to pass it an array of char which I have defined as follows:

char my_array[10];

The question is: is it all right to typecast to a far pointer? E.g.
int i = my_function( (char far *)(char *)my_array );

I think I am still only passing a near pointer, so how do I force
the segment to be passed also?

C (as in standard C) doesn't have near and far pointers. You should
find your answers in your compiler docs, or a group dedicated to its use.
 
Seebs

Hello friends,

I have a couple of doubts about pointers in C.

No, you have a couple of doubts about pointers in your implementation.
1) I have a function whose prototype is:

my_function(char far *);

You need a DOS or Windows newsgroup.
I am using Borland Turbo-C 2.01 if it's relevant.

It is. Your question is unique to a non-standard feature of some DOS
and Windows implementations which had different kinds of pointers.

-s
 
Kenny McCormack

It's about 21 years old. Have you considered using a more modern
C implementation?

Is there anything per se wrong with using something that is old?

Obviously, these people have good solid reasons for using TurboC.

--
No, I haven't, that's why I'm asking questions. If you won't help me,
why don't you just go find your lost manhood elsewhere.

CLC in a nutshell.
 
Nick Keighley

No, you have a couple of doubts about pointers in your implementation.

You need a DOS or Windows newsgroup.

It is. Your question is unique to a non-standard feature of some DOS
and Windows implementations which had different kinds of pointers.

Even Windows doesn't use far pointers (did it ever?)
 
Tom St Denis

Is there anything per se wrong with using something that is old?

Well for one, there is no official support for it. So if you plan on
doing anything professionally [including education] with it you're at
a loss.

Also, DJGPP is freely available for DOS platforms. It also uses GCC
and is modern, supported, and a hell of a better compiler.
Obviously, these people have good solid reasons for using TurboC.

I seriously 100% doubt that.

Tom
 
BGB / cr88192

No, you have a couple of doubts about pointers in your implementation.

You need a DOS or Windows newsgroup.

It is. Your question is unique to a non-standard feature of some DOS
and Windows implementations which had different kinds of pointers.

<--
Even Windows doesn't use far pointers (did it ever?)
-->

Win 3.x and earlier...


16-bit apps in Win95-WinXP (NT-XP via NTVDM), but XP64, Vista, and Win7
dropped support for them...

Win32s (Win 3.x) and Win95 native and newer apps generally used 32-bit
memory, and so didn't make use of far pointers...

far pointers have some (limited) use WRT 32-bit code (on x86 and x86-64),
but this is mostly limited to OS kernel code and occasional edge cases
(mostly accessing thread-local storage and similar), and hence most
32/64-bit compilers don't bother supporting them AFAIK (usually it is
confined to the occasional piece of ASM or inline ASM or similar).


On x86-64, segments are mostly broken, and left as a vestigial
feature (although FS/GS remain as a special-case hack, mostly to help
facilitate TLS/etc...).
 
Kenny McCormack

It is.  Your question is unique to a non-standard feature of some DOS
and Windows implementations which had different kinds of pointers.

Even Windows doesn't use far pointers (did it ever?)

Well, that is certainly a question of semantics (what this newsgroup
excels at). Certainly in Win16 days, it did. In Win32, it could be
said that all pointers are "far" - in that they are 32 bits, which in
the terminology of DOS/Windows is "far".

If you spend some time looking through the include files of a Windows C
installation, you will see that "far" is still a concept employed
therein. As far as I can tell (although I certainly didn't look in
great depth), in the current incarnation of things, "far" (and "FAR")
are #define'd to empty, so that the concept essentially drops out at
compile time. But it is still the case that "far" (or "FAR") is part of
the definition of WINAPI (and some other things).

P.S. In a way, your question would have been more poignant if you had
asked if they used "near" pointers. That question could, possibly, be
answered in the negative.

--
(This discussion group is about C, ...)

Wrong. It is only OCCASIONALLY a discussion group
about C; mostly, like most "discussion" groups, it is
off-topic Rorsharch [sic] revelations of the childhood
traumas of the participants...
 
Stargazer

I am using Borland Turbo-C 2.01 if it's relevant.
Is there anything per se wrong with using something that is old?

Well for one, there is no official support for it.  So if you plan on
doing anything professionally [including education] with it you're at
a loss.

For two, Turbo-C 2.01 was released even before the ANSI C89 Standard.
Also, DJGPP is freely available for DOS platforms.  It also uses GCC
and is modern, supported, and a hell of a better compiler.

Unfortunately, it's not the same thing. DJGPP works only in 32-bit
flat mode and it requires a DOS extender. The DOS extender is included
in the distro, but it won't work on pure real-mode DOS and won't work
at all on anything before the 386.

Daniel
 
Stargazer

It is.  Your question is unique to a non-standard feature of some DOS
Well, that is certainly a question of semantics (what this newsgroup
excels at).  Certainly in Win16 days, it did.  In Win32, it could be
said that all pointers are "far" - in that they are 32 bits, which in
the terminology of DOS/Windows is "far".

No, in Win32 all the pointers are "near". The difference between
"near" and "far" pointers on x86 is not the number of bits, but whether
they use an explicit segment:offset pair in two registers (segment and
base/index) or only base/index+offset, assuming a default segment (DS
for data, SS for stack). Win32 uses only 32-bit offsets.

At least Watcom C for DOS (maybe there are others, I don't know)
allowed the use of "far32" pointers (the pointers were effectively 48
bits in size).
If you spend some time looking through the include files of a Windows C
installation, you will see that "far" is still a concept employed
therein.  As far as I can tell (although I certainly didn't look in
great depth), in the current incarnation of things, "far" (and "FAR")
are #define'd to empty, so that the concept essentially drops out at
compile time.  But it is still the case that "far" (or "FAR") is part of
the definition of WINAPI (and some other things).

In Win32, "FAR" is present only for source compatibility with Win 3.x
16-bit code (the Win 3.x API was indeed defined with far pointers).
P.S.  In a way, your question would have been more poignant if you had
asked if they used "near" pointers.  That question could, possibly, be
answered in the negative.

Positive since introduction of Win32.

BTW, understanding far and near pointers gives a good understanding of
some pointer constraints, in particular:

1) why code and data pointers may be of different sizes
2) why conversion of pointers to integers and back yields undefined
behavior when such converted pointers are used.

Daniel
 
Malcolm McLean

2) As expected, (char far *)NULL is NULL, but if I set
char near *x = NULL;
then (char far *)x is non-NULL (the segment is non-zero)!

This seems crazy,
The far and near pointer system has been tacked onto C to get round
the problem of segments. So some things like casts might not work
sensibly. In this case, the compiler is probably giving the pointer
the segment corresponding to the segment used as the base of near
pointers, and treating NULL as the first address within that segment,
forgetting that this creates a non-null far pointer. It's a poor
design choice, but the only way round it would be to have a special
check for null in every cast, which would exact a runtime penalty.
 
Tom St Denis

Well for one, there is no official support for it.  So if you plan on
doing anything professionally [including education] with it you're at
a loss.

For two, Turbo-C 2.01 was released even before the ANSI C89 Standard.
Also, DJGPP is freely available for DOS platforms.  It also uses GCC
and is modern, supported, and a hell of a better compiler.

Unfortunately, it's not the same thing. DJGPP works only in 32-bit
flat mode and it requires a DOS extender. The DOS extender is included
in the distro, but it won't work on pure real-mode DOS and won't work
at all on anything before the 386.

Who says their target is limited to pre-386?

And frankly that's a dumbshit restriction to put on oneself. I'd
rather have an ARM or PPC, even stripped down, than an 8086 any day.
They win out in all directions: area, MIPS/area, MIPS/power, etc.,
etc...

Anyone stupid enough to think that an 8086 is a good design choice in
2010 deserves every last bit of pain they get.

Tom
 
Nobody

The far and near pointer system has been tacked onto C to get round
the problem of segments.

Near/far pointers were tacked on for efficiency. They're completely
unnecessary if you use the "huge" model (all pointers have both segment
and offset, and are always normalised), but this has penalties in terms
of both memory consumption and speed.
 
Stargazer

2) As expected, (char far *)NULL is NULL, but if I set
char near *x = NULL;
then (char far *)x is non-NULL (the segment is non-zero)!

Actually, a NULL pointer doesn't have to be represented by all-0 bits.
A "far" NULL pointer may be represented by the segment having a special
value and offset 0, or even by the segment having *any* value and
offset zero. It just has to compare equal to NULL or (void *)0.
This seems crazy, it means that if you have a prototyped function
with a far pointer argument (maybe NULL) and you have some
near-mode address from somewhere else (maybe NULL),
you have to handle the NULL case specially. What's going on?

The compiler may have a bug, or it may use a not-all-0s representation
of the far NULL pointer ("far" is a non-portable extension). You may
print out something like:

char near *x = NULL;
printf("Test 1: %d; test 2: %d\n",
       (char far *)x == NULL, (char far *)x == (void *)0);

If you get two 1's then the compiler is doing the right thing.

Daniel
 
Stargazer

Who says their target is limited to pre-386?

OP did. Turbo C 2.01 produces only 16-bit DOS code (perhaps with a
third-party linker it can produce 16-bit Windows executables), while
DJGPP produces only 32-bit code for DPMI extenders. Even though 16-bit
code may run on the 386 and later in legacy modes (real and V86), there
are enough differences to consider the execution modes different
run-time environments.
And frankly that's a dumbshit restriction to put on oneself.  I'd
rather have an ARM or PPC, even stripped down than an 8086 anyday.
They win out in all directions, area, MIPS/area, MIPS/power, etc,
etc...

Anyone stupid enough to think that an 8086 is a good design choice in
2010 deserves every last bit of pain they get.

I don't advocate or discourage design choices. 8086/186 is at least a
cheap design choice, and in some projects people may choose to assume
that "pain" :) You don't need (I think) to pay fees for implementing
8086 like you would with ARM or MIPS, and its documentation is much
clearer and simpler than three-level manuals of PPC.

Personally I have seen only one embedded project that used an x86 (AMD
Geode) in 10 years, and that one failed. About everything else that I
have even heard of indeed used ARM, PowerPC or MIPS.

Daniel
 
Ben Bacarisse

Stargazer said:
Actually, NULL pointer doesn't have to be represented by all 0 bits. A
"far" NULL pointer may be represented by segment having a special
value and offset 0 or even by segment having *any* value and offset
zero. It just has to compare equal to NULL or (void*)0.

It should probably also compare equal to 0 (with no decoration). I say
"probably" because far and near pointers are not standard, so the
provisions of pointer comparisons in the standard need not apply to
them. However, a consistent implementation would make x == NULL, x ==
(void *)0 and x == 0 all behave the same way.

<snip>
 
Tom St Denis

OP did. Turbo C 2.01 produces only 16-bit DOS code (perhaps with a
third-party linker it can produce 16-bit Windows executables), while
DJGPP produces only 32-bit code for DPMI extenders. Even though 16-bit
code may run on the 386 and later in legacy modes (real and V86), there
are enough differences to consider the execution modes different
run-time environments.

It's entirely possible they're running on a post-286 computer and just
happen to have a copy of TC 2. In which case ... I stand by what I
said.

Thing is, TC2 was EOL'ed long ago. So from a business point of view
it's a non-starter. There is no support for it. There are no upgrades,
and it's not even C90 compliant. Might as well be using BYTE magazine's
Small-C for all it's worth.
I don't advocate or discourage design choices. 8086/186 is at least a
cheap design choice, and in some projects people may choose to assume
that "pain" :) You don't need (I think) to pay fees for implementing
8086 like you would with ARM or MIPS, and its documentation is much
clearer and simpler than three-level manuals of PPC.

Except that you're still shooting yourself in the foot in terms of
efficiency or best area usage. A very low-end ARM, for instance, is
ridiculously small and still more powerful than an 8086 [from any
manufacturer].
Personally I have seen only one embedded project that used an x86 (AMD
Geode) in 10 years, and that one failed. About everything else that I
have even heard of indeed used ARM, PowerPC or MIPS.

Because x86 is useless in the embedded space. It's far, far too
inefficient an ISA. Thing is, too, if you started to throw the tech
behind most x86 processors [out-of-order execution, multiple
pipelines, etc.] at PPC or ARM they'd start becoming very attractive
in the high-end space.

Tom
 
