Does your C compiler support "//"? (was Re: using structures)

Dan Pop

An earlier post said:
It might not be inappropriate to call the programmer dumb
if the program *could* have been portable, but the programmer
used int where long was called for, saying that he didn't
care about the matter.

Maybe he cared about other matters, such as code performance on
the platforms the program was intended to be portable to. Since
implementations with 64-bit longs are not unheard of these days,
using long for a 32-bit integer type may be wasteful, especially in
the case of very large arrays.
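
To make the cost concrete, a sketch (the 4/8 split assumes an LP64-style
platform with 32-bit int and 64-bit long; the printed sizes are
platform-dependent):

#include <stdio.h>

int main(void)
{
    /* On an LP64 platform this prints 4 and 8: an array of a
       million longs needs twice the memory of an array of a
       million ints holding the same 32-bit values. */
    printf("sizeof(int)  = %lu\n", (unsigned long)sizeof(int));
    printf("sizeof(long) = %lu\n", (unsigned long)sizeof(long));
    printf("1000000 longs = %lu bytes\n",
           (unsigned long)(1000000 * sizeof(long)));
    printf("1000000 ints  = %lu bytes\n",
           (unsigned long)(1000000 * sizeof(int)));
    return 0;
}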

And hiding the actual types behind typedefs often creates more problems
than it solves.

Dan
 
Thad Smith

Dan said:
It is the C standard itself that makes porting code to embedded
systems a pain, by not requiring enough *standard* library support
for freestanding implementations. If the code contains a standard
library function call, it is, by definition, non-portable to such
systems/implementations.

Of the compilers which target embedded systems, almost all will include
a library which is a subset of the standard library. They usually
include the string and character functions, and conversion functions
(sprintf, sscanf). Math routines are usually included. Limiting a
compiler to not provide any library support doesn't make marketing
sense, so it's there, to the extent that the vendor thinks is required
to be taken seriously.

I haven't heard of a freestanding implementation which limits itself to
the Standard minimum. Perhaps this suggests, as you imply, that the bar
should be raised in this area for compliant freestanding
implementations.

Thad
 
Paul Eggert

An earlier post said:
When you use an API, you use the types chosen by its designers. The
issue is whether you should *choose* to make an object of type int
when you know it might hold an oversize value on some machines.

Fair enough, but in the case of POSIX (or GNU or whatever), the API
already specifies that the int width is at least 32 bits. There is a
nonzero cost to worrying about smaller machines. I don't want to pay
that cost (life is too short!), and this is a reasonable position to
take.
The same post said:
If I have to, say, read 100,000 bytes under Unix, my portable
program will do it in a series of reads each less than 32,767 bytes
long. That takes more code and more execution time than a single
read.

The execution time doesn't matter that much these days. But the code
does. The 32767-byte read code will be trickier to maintain than the
single-read code. And there will be cases (involving short reads)
where the 32767-byte version cannot emulate the single-read version.
In the POSIX world, it's really a no-brainer: programmers should not
waste one whit of their valuable time worrying about 16-bit hosts.
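
For concreteness, a sketch of what the chunked version looks like (the
helper name read_all and the EINTR handling are illustrative, not taken
from the posts):

#include <errno.h>
#include <unistd.h>

/* Read nbytes into buf, capping each read() at 32767 bytes so
   every count and return value fits even in a 16-bit int.
   Returns the total read (short at end of file), or -1 on error. */
long read_all(int fd, char *buf, long nbytes)
{
    long total = 0;

    while (total < nbytes) {
        long left = nbytes - total;
        unsigned chunk = left > 32767 ? 32767u : (unsigned)left;
        int got = read(fd, buf + total, chunk);

        if (got < 0) {
            if (errno == EINTR)
                continue;   /* interrupted; retry */
            return -1;      /* genuine error */
        }
        if (got == 0)
            break;          /* end of file */
        total += got;
    }
    return total;
}

Where int is at least 32 bits, all of this collapses into a single
read() call and one error check.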
 
Douglas A. Gwyn

Dan said:
It is the C standard itself that makes porting code to embedded
systems a pain, by not requiring enough *standard* library support
for freestanding implementations. If the code contains a standard
library function call, it is, by definition, non-portable to such
systems/implementations.

First, that has nothing to do with the size of int.

Second, you're wrong; a substantial amount of the standard C
library is required for conformance of freestanding implementations.
There has also been work toward a TR aimed specifically at embedded
systems, covering two main areas: fixed-point arithmetic and I/O
hardware access.

It happens that I use a huge amount of portable C code on my current
embedded-processor target. I develop and test it using a much more
comfortable hosted environment, where I also use it for a variety of
hosted applications, and port it to the embedded processor simply by
recompiling for the different target.
 
Douglas A. Gwyn

Dan said:
Maybe he cared about other matters, such as code performance on
the platforms the program was intended to be portable to. Since
implementations with 64-bit longs are not unheard of these days,
using long for a 32-bit integer type may be wasteful, especially in
the case of very large arrays.

We have <stdint.h> for that.

Dan also said:
And hiding the actual types behind typedefs often creates more problems
than it solves.

Not in my experience. Refusing to use typedefs causes more problems.
 
Douglas A. Gwyn

Francis said:
As C Standards 'experts' we may have opinions about how POSIX builds on
C but we should restrain ourselves from casting opprobrium on those who
know how POSIX works and are happy that that is where their code is
targeted.

Actually I was one of the handful of people who developed the original
POSIX.1 specification (IEEE P1003 at the time). Also I was using Unix
long before the effort to standardize its characteristics via POSIX,
including on platforms with 16-bit words. I also spend most of my
software development time using POSIX-compatible platforms. P1003
generally used typedefs (e.g. uid_t), reserving raw "int" mainly for
cases where 16 bits was plenty. According to what we have been hearing
in this thread, some very questionable decisions have since been made.
My involvement in C standardization hardly disqualifies me from
questioning those decisions.
Francis also said:
It is part of an expert's responsibility to understand the requirements
of the job they are doing, and it is not dumb or stupid to take advantage
of specific domain and target platform knowledge.

It is dumb to impose unnecessary requirements based on misconceptions.
 
Douglas A. Gwyn

Paul said:
In the POSIX world, it's really a no-brainer: programmers should not
waste one whit of their valuable time worrying about 16-bit hosts.

Look at it from the other direction: POSIX is trying to tell the
Motorola 68000 C compiler implementor how big an "int" must be.
That is a tail wagging the dog.
 
Paul Eggert

Douglas A. Gwyn said:
Look at it from the other direction: POSIX is trying to tell the
Motorola 68000 C compiler implementor how big an "int" must be.

No, POSIX is standardizing widespread existing practice.

For many years the POSIX standard allowed 16-bit int. But few if any
implementers used that freedom, while many POSIX applications were
written assuming 32-bit (or wider) int.

In retrospect it was probably a mistake for POSIX to allow 16-bit int.
It was an implementation freedom that didn't buy POSIX programmers any
real portability, and it confused programmers into thinking that
16-bit int was a reasonable possibility for POSIX platforms.

In 2001 POSIX was changed to reflect widespread existing practice, by
requiring 32-bit (or wider) int. This fixed the problems mentioned
above.

Requiring 32-bit (or wider) int was no hardship for POSIX or
quasi-POSIX implementations based on the Motorola 68000, since they
were all using 32-bit int anyway.
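
Code that relies on that requirement can state the assumption
explicitly; a minimal sketch:

#include <limits.h>

/* Fail at compile time, rather than misbehaving at run time,
   on any host where int is narrower than 32 bits. */
#if INT_MAX < 2147483647
#error "This program assumes a 32-bit (or wider) int, as POSIX.1-2001 requires."
#endif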
 
Douglas A. Gwyn

Paul said:
For many years the POSIX standard allowed 16-bit int. But few if any
implementers used that freedom, while many POSIX applications were
written assuming 32-bit (or wider) int.

In retrospect it was probably a mistake for POSIX to allow 16-bit int.
It was an implementation freedom that didn't buy POSIX programmers any
real portability, and it confused programmers into thinking that
16-bit int was a reasonable possibility for POSIX platforms.

In 2001 POSIX was changed to reflect widespread existing practice, by
requiring 32-bit (or wider) int. This fixed the problems mentioned
above.

Since there *was* no problem mentioned above, it appears
that in 2001 it was decided to retroactively bless those
instances of existing programmer carelessness. I
suppose that similar thinking will cause some version
of the POSIX standard to require that long int be
limited to 32 bits, on the grounds that there have been
many POSIX programs written with that assumption?

Some of us actually maintain 16-bit Unix platforms
where POSIX conformance would have been quite useful,
but not now that an unnecessary element has been forced
into the specification.

It is not a surprise that the promoters of such a change
are also the same people who squawked loudly about the
introduction of standardization for the type long long.
One gathers that they are unhappy with the thought that
there is any variety in machine architectures, and are
eager to legislate the world into conforming to their
limited machine model.
 
Francis Glassborow

Douglas A. Gwyn said:
Since there *was* no problem mentioned above, it appears
that in 2001 it was decided to retroactively bless those
instances of existing programmer carelessness. I
suppose that similar thinking will cause some version
of the POSIX standard to require that long int be
limited to 32 bits, on the grounds that there have been
many POSIX programs written with that assumption?

Some of us actually maintain 16-bit Unix platforms
where POSIX conformance would have been quite useful,
but not now that an unnecessary element has been forced
into the specification.

It is not a surprise that the promoters of such a change
are also the same people who squawked loudly about the
introduction of standardization for the type long long.
One gathers that they are unhappy with the thought that
there is any variety in machine architectures, and are
eager to legislate the world into conforming to their
limited machine model.


However, note that WG14's decision to bless long long was argued strongly
on the basis of existing practice, even though exactly the same arguments
had been applied to rejecting it. Note that POSIX adds a requirement for the
platforms it deals with and it remains possible for a compiler to
conform to both standards in this regard, while WG14's decision re long
long made it impossible for POSIX to remove the type and still allow for
conforming C compilers on that platform.

No, let us not argue this case all over again but we should all
understand that a derived standard can strengthen a requirement but not
weaken one.
 
Dan Pop

Douglas A. Gwyn said:
First, that has nothing to do with the size of int.

Not directly: practically all the current implementations having 16-bit
ints are freestanding. And there is no way to write a C program that
is portable to freestanding implementations: what is the name of the
function called at program startup?
He also said:
Second, you're wrong; a substantial amount of the standard C
library is required for conformance of freestanding implementations.

Let's have a look at this "substantial" amount of the standard C library
that is required for conformance of freestanding implementations:

C89: <float.h>, <limits.h>, <stdarg.h> and <stddef.h>.
C99: <float.h>, <iso646.h>, <limits.h>, <stdarg.h>, <stdbool.h>,
<stddef.h> and <stdint.h>.

In either case, only headers that don't provide access to library
*functions*, which is what I was talking about in my previous post.
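
For illustration, a sketch of a translation unit confined to that C89
freestanding set; note that it cannot call a single library function:

/* Two of the headers required even of C89 freestanding
   implementations; they define macros and types, not functions. */
#include <limits.h>
#include <stdarg.h>

/* A variadic sum: <stdarg.h> is all macros, so nothing here
   depends on a library being linked in. */
static long sum(int count, ...)
{
    va_list ap;
    long total = 0;
    int i;

    va_start(ap, count);
    for (i = 0; i < count; i++)
        total += va_arg(ap, int);
    va_end(ap);
    return total;
}

/* On a hosted implementation the entry point is main(); on a
   freestanding one even its name is implementation-defined. */
int main(void)
{
    return sum(2, INT_MAX / 2, 1) > 0 ? 0 : 1;
}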

You may want to engage your brain before your next reply to any of my
posts, to avoid making a fool of yourself.
He also said:
There has also been work toward a TR aimed specifically at embedded
systems, covering two main areas: fixed-point arithmetic and I/O
hardware access.

How would that TR improve the portability between hosted and freestanding
implementations? Which is what we're talking about, isn't it?

Due to the brain damage in the C standard, it is impractical to produce
even general purpose libraries that are portable to both kinds of
implementations: you'd have to implement all kinds of standard library
functions for the benefit of the freestanding implementations, which
may not provide them.

Dan
 
Dan Pop

Douglas A. Gwyn said:
We have <stdint.h> for that.

Not in the real world, where the implementations blissfully ignore the C99
standard.
He also said:
Not in my experience. Refusing to use typedefs causes more problems.

You've probably never been careful enough when writing your code and it
worked by accident, rather than by design. Code *properly* written to
use <stdint.h>-style typedefs is a *lot* less readable than code that
uses the basic C types.

Let's take the most common example: size_t. If an implementation
defines it as unsigned short (the standard allows it), arithmetic
performed on two size_t operands is evaluated using signed arithmetic
and generates an int result, which is not what most people expect.
Their code works because no real implementation defines size_t as
anything less than unsigned int, not because the C standard guarantees
that it works (i.e. it works by accident).
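
A sketch of the trap, with a stand-in typedef since (as noted) no real
implementation defines size_t this narrowly; it assumes 16-bit short
and 32-bit int:

#include <stdio.h>

/* Stand-in for a size_t that some implementation could legally
   define as unsigned short. */
typedef unsigned short narrow_size_t;

int main(void)
{
    narrow_size_t a = 1, b = 2;

    /* Both operands promote to signed int, so a - b is the int
       value -1, not the huge value unsigned wraparound would give. */
    printf("%d\n", (a - b) < 0);   /* prints 1 under these assumptions */
    return 0;
}

With size_t defined as unsigned int or wider, the same expression wraps
around to a large positive value and the comparison yields 0, which is
exactly the "works by accident" behaviour described above.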

I'm all for sticking to what the C standard guarantees, when this can be
done at no cost. When this is not the case, the good programmer has to
make a careful analysis of the costs and benefits of this approach
and switch to the guarantees provided by the *relevant* real-life
implementations if the costs exceed the benefits.

When writing a UNIX application these days, the benefits of considering
the possibility of a less than 32-bit int are zilch. The code is very
likely to contain API calls that are not available on the ancient 16-bit
UNIX versions. I'm not aware of any 16-bit UNIX claiming SUS conformance.

Dan
 
Douglas A. Gwyn

Francis said:
... Note that POSIX adds a requirement for the
platforms it deals with and it remains possible for a compiler to
conform to both standards in this regard, while WG14's decision re long
long made it impossible for POSIX to remove the type and still allow for
conforming C compilers on that platform.

The issue is one of partition of function.
The C standard, at the level of basic types, addresses
what is largely a matter of machine architecture, and
that is the right level to be making decisions about
types within the programming language.
POSIX can easily layer on top of that; there are
sufficient standard C facilities for POSIX to use to
specify what types *existing in the language
implementation* shall be used for the POSIX interface.
Instead, some appear to think that POSIX ought to
dictate the compiler implementation choices at the
architectural level. That is simply wrong-minded.
Francis also said:
No, let us not argue this case all over again but we should all
understand that a derived standard can strengthen a requirement but not
weaken one.

Of course they *can* impose additional requirements upon
the C implementation, but generally speaking what that
means is merely that POSIX conformance is not feasible
on platforms where there is no valid reason why it should
not be. Some restrictions are simply unnecessary and
ill-advised.
 
Douglas A. Gwyn

Dan said:
Due to the brain damage in the C standard, it is impractical to produce
even general purpose libraries that are portable to both kinds of
implementations: you'd have to implement all kinds of standard library
functions for the benefit of the freestanding implementations, which
may not provide them.

If your point is that much of <string.h> could be provided
with a freestanding implementation, sure, and the hosted
portion of the C standard provides guidance as to that
interface. Virtually every vendor of development platforms
for standalone targets provides as much as they can of the
hosted portion of the library, sometimes even including
printf. Because there is no problem in practice, there has
been remarkably little pressure to tweak the standard to
include those additional functions in the requirements for
*all* freestanding implementations. Feel free to propose
it (again) during the initial phase of work toward C0x.
 
Douglas A. Gwyn

Dan said:
Not in the real world, where the implementations blissfully ignore the C99
standard.

Yes, in the real world. I provided a free implementation
many years ago that you could use where implementations
fail to provide it. In fact Unix systems have had
<inttypes.h> for quite some time, so if you don't mind
the extra macros and declarations you could use that.

Ignoring the available tools is not the mark of a good
craftsman.
Dan also said:
When writing a UNIX application these days, the benefits of considering
the possibility of a less than 32-bit int are zilch. The code is very
likely to contain API calls that are not available on the ancient 16-bit
UNIX versions. I'm not aware of any 16-bit UNIX claiming SUS conformance.

What, like select(2)? We have that on our 16-bit Unix
systems.

Anyway, the point is not that *all* Unix code needs to
be portable to all comparable systems, but that *much*
code developed with Unix in mind would be of much more
benefit if it had been coded without making unnecessary
assumptions about such things as the width of type int.
Over the decades I've moved a mountain of C code among
various platforms, from 8-bit micros through Cray
supercomputers, from dedicated controllers through
windowing operating systems, with Unix/POSIX involved
in many cases. I've learned a lot about which coding
choices are essential and which ones are negligent,
which restrictions aid porting and which interfere.
Assuming that int has at least 32 bits is in the
category of restrictions that interfere with code
portability.
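
The distinction, in code form, relying only on the standard's minimum
guarantees (a sketch):

/* Portable everywhere the C standard reaches: long is guaranteed
   to hold at least 32 bits. */
long nbytes = 100000L;

/* The unnecessary assumption: int is guaranteed only 16 bits, so
   where it is that narrow, 100000 cannot be represented and the
   converted value is implementation-defined. */
int nbytes2 = 100000;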
 
Dan Pop

Douglas A. Gwyn said:
Yes, in the real world. I provided a free implementation
many years ago that you could use where implementations
fail to provide it.

How does your free implementation check that the typedefs it provides
match *all* the requirements imposed by the C standard (no padding bits,
two's complement representation for exact-width types)? How does it
provide int_least64_t on a platform with no native 64-bit (or longer)
integer type?
He also said:
In fact Unix systems have had
<inttypes.h> for quite some time, so if you don't mind
the extra macros and declarations you could use that.

Could I? The issue under discussion requires int_least32_t and the

He also said:
Ignoring the available tools is not the mark of a good
craftsman.

Ignoring the limits of the available tools is the mark of the fool.
He also said:
What, like select(2)? We have that on our 16-bit Unix
systems.

And is that enough to be SUS conformant?

Dan
 
Dan Pop

Douglas A. Gwyn said:
If your point is that much of <string.h> could be provided
with a freestanding implementation, sure, and the hosted
portion of the C standard provides guidance as to that
interface. Virtually every vendor of development platforms
for standalone targets provides as much as they can of the
hosted portion of the library, sometimes even including
printf.

Which *guarantees* exactly zilch WRT the portability of code including
other headers than the ones required for freestanding implementations.
He also said:
Because there is no problem in practice, there has

There is no problem with assuming at least 32-bit int's for the
current hosted implementations in practice, either. And programs
developed for such implementations don't get ported to freestanding
implementations in practice, either. So, make up your mind: do we
place our discussion in the context of the C standard or outside it?
He continued:
been remarkably little pressure to tweak the standard to
include those additional functions in the requirements for
*all* freestanding implementations.

Then, the C code that is *guaranteed* to be portable to *all*
freestanding implementations is not allowed to include other headers
than the ones required for freestanding implementations. It's as
simple as that. There is little point in invoking the C standard once
you have to use features NOT guaranteed by the C standard. Which means
the bulk of Clause 7 when talking about freestanding implementations.
He also said:
Feel free to propose it (again) during the initial phase of work
toward C0x.
If the committee was stupid enough to reject it the first time, why should
I expect a different decision the second time?

Except for <assert.h>, certain parts of <stdio.h> and the dynamic
memory allocation functions, there is precious little in the C89 library
specification that couldn't be supported on freestanding implementations
or that wouldn't be useful on them. They'd also greatly benefit from the
C99 single precision additions to <math.h>, double precision being seldom
used in most embedded control applications.

Dan
 
Paul Eggert

Douglas A. Gwyn said:
Since there *was* no problem mentioned above

It wasn't a practical problem, true. All the POSIX implementations
had 32-bit (or wider) integers, and all the programs that additionally
assumed 32-bit (or wider) integers were portable in practice. The
only problem was that the standard didn't reflect reality, and this
had the potential of confusing programmers who were trying to
faithfully code to the standard, and who didn't know that all the real
POSIX systems had 32-bit or wider int. This source of confusion was
fixed by fixing the POSIX standard.
He also said:
I suppose that similar thinking will cause some version of the POSIX
standard to require that long int be limited to 32 bits

Not at all. Many POSIX implementations have 64-bit long. POSIX even
allows long (and short, and int) to be wider than 64 bits.
He also said:
Some of us actually maintain 16-bit Unix platforms
where POSIX conformance would have been quite useful,

You had many years to do it, and you never got around to doing it.
Time has passed you by. As a practical matter, there's no way that
you could implement POSIX 1003.1-2001 on a 16-bit host anyway, even if
the 32-bit int requirement were removed. There just wouldn't be
enough room to do it practically.
He also said:
It is not a surprise that the promoters of such a change
are also the same people who squawked loudly about the
introduction of standardization for the type long long.
One gathers that they are unhappy with the thought that
there is any variety in machine architectures,

That cheap shot is entirely unworthy of you. The people in question
are quite aware of architectural variance.
 
Keith Thompson

Paul Eggert said:
You had many years to do it, and you never got around to doing it.
Time has passed you by. As a practical matter, there's no way that
you could implement POSIX 1003.1-2001 on a 16-bit host anyway, even if
the 32-bit int requirement were removed. There just wouldn't be
enough room to do it practically.

If you assume that a "16-bit host" can only have 16-bit addresses (and
therefore no more than 64 kbytes of memory -- perhaps 64k each for I and D
spaces), that's probably true. But it can make sense for a system to
have 16-bit ints and larger addresses. The 68000 is one example; I
think some of the earlier members of the x86 family also qualify.

Of course you can always make "int" 32 bits even if the machine's
natural word size is 16 bits, though you have to bend the standard's
recommendation that 'A "plain" int object has the natural size
suggested by the architecture of the execution environment'.
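
A quick sketch makes the point visible on any given implementation (the
printed sizes are, of course, implementation-specific):

#include <stdio.h>

int main(void)
{
    /* int width and address width are independent choices: a
       68000-style ABI could print 2 and 4, where a typical LP64
       system prints 4 and 8. */
    printf("int:     %lu bytes\n", (unsigned long)sizeof(int));
    printf("pointer: %lu bytes\n", (unsigned long)sizeof(void *));
    return 0;
}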
 
