Does your C compiler support "//"? (was Re: using structures)


Chris Hills

Paul Eggert said:
It's not at all dumb to assume that int is at least 32 bits wide.
POSIX 1003.1-2001 (another ISO standard) requires it, as do the GNU
coding standards. Lots of portable C code safely assumes it.

No doubt there are still some C platforms with 16-bit int, or other
widths less than 32 bits, but such platforms are not of much interest
these days to authors of a wide class of portable software.

Whilst this may be true for some programmers, the vast majority of
processors in use today are NOT 32-bit. They are 8-bit. AFAIK one in
three processors currently in use on the planet (and above it) is an
8051 type! There are plenty of other 8-bit types in widespread use
(Motorola HC, AVR, PIC, etc.). Then follow the 16-bit types and then
the 32-bit types.

When I gave a paper at an embedded conference I took a straw poll:
there were people using everything from 4-bit to 128-bit processors.
In order of most users, I think it was: 8, 16, 32, 64, 4, 128.

It is only desktop programmers who have a blinkered view of what is
in use. BTW, in 20 years of SW engineering on 8-, 16-, 32- and 64-bit
systems, I have yet to write a 32-bit Windows program, or for that
matter a program that was required to be portable other than to
another CPU of the same bit size.

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\
/\/\/ (e-mail address removed) www.phaedsys.org \/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
 

Francis Glassborow

Chris Hills said:
It is only desktop programmers who have a blinkered view of what is
in use. BTW, in 20 years of SW engineering on 8-, 16-, 32- and 64-bit
systems, I have yet to write a 32-bit Windows program, or for that
matter a program that was required to be portable other than to
another CPU of the same bit size.

Not necessarily a blinkered view. If I am working on a desktop
application, it is not unreasonable to assume that I only need to
consider porting it to other desktop OSs.

Please also note that a great deal of embedded code does not compile for
desktop systems unless you use a non-conforming compiler.

Horses for courses.
 

Chris Hills

Francis Glassborow said:
Not necessarily a blinkered view.

The initial post was stated as a global truth. My point was that it
is not, and that the majority are NOT 32-bit systems.
Paul Eggert said:
It's not at all dumb to assume that int is at least 32 bits wide.
POSIX 1003.1-2001 (another ISO standard) requires it, as do the GNU
coding standards. Lots of portable C code safely assumes it.

No doubt there are still some C platforms with 16-bit int, or other
widths less than 32 bits, but such platforms are not of much interest
these days to authors of a wide class of portable software.

Not only are there "still some C platforms with 16-bit int", there
are some 8-bit ones as well that have GNU compilers. Therefore it is
dumb to assume that "int is at least 32 bits wide".
Francis Glassborow said:
If I am working on a desktop application, it is not unreasonable to
assume that I only need to consider porting it to other desktop OSs.

True. In fact I would hope that if you are working on a desktop app
you would be writing code that is as close to standard C, and as
portable, as possible.

However, the original premise did not say PC-based desktop apps. Macs
are of course 16-64 bit, dual 64-bit for the new G5... compare that
to the average "desktop PC" of 20 years ago, the Spectrum :)

You have assumed "desktop", which the original author did not. You
agree with him; I do not.
Francis Glassborow said:
Please also note that a great deal of embedded code does not compile
for desktop systems unless you use a non-conforming compiler.

As you well know, I am an embedded programmer, though that ranges
from "embedded" SPARC-based Unix systems to 8051 smart cards.

I would think that there is virtually no embedded SW that will
compile on a desktop system... Unix and Linux being the exceptions,
where they are used in both desktop and embedded systems...
Francis Glassborow said:
Horses for courses.

However some people use derivatives of horses.

In general terms it is dumb to assume an int is 32 bits. You can only
assume that int is at least 16 bits, unless you define the area you
are talking about, such as "modern desktop PCs".

Now, let's really throw in a barrel of red herrings and mention
endianness and 1's & 2's complement :)


/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\
/\/\/ (e-mail address removed) www.phaedsys.org \/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
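
To make the 16-bit pitfall concrete, here is a minimal sketch (not
from the thread; the value 50000 is arbitrary, any constant above
32767 would do):

#include <stdio.h>

int main(void)
{
    /* Not portable: on a 16-bit-int target the constant 50000 has
       type long and does not fit in int, so the conversion yields an
       implementation-defined result. */
    int bad = 50000;

    /* Portable: long is guaranteed to be at least 32 bits wide. */
    long good = 50000L;

    printf("bad = %d, good = %ld\n", bad, good);
    return 0;
}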
 

Christian Bau

Chris Hills said:
In general terms it is dumb to assume an int is 32 bits. You can only
assume that int is at least 16 bits, unless you define the area you
are talking about, such as "modern desktop PCs".

You can always write something like this after including the right
header files:

#include <limits.h>   /* for UINT_MAX and ULONG_MAX */

typedef signed char int8;
typedef unsigned char uint8;
typedef signed short int16;
typedef unsigned short uint16;

#if ((UINT_MAX >> 15) >> 15) >= 3
typedef int int32;
typedef unsigned int uint32;
#else
typedef long int32;
typedef unsigned long uint32;
#endif

#if ((ULONG_MAX >> 31) >> 31) >= 3
typedef long int64;
typedef unsigned long uint64;
#else
typedef long long int64;
typedef unsigned long long uint64;
#endif

This will portably define typedefs for at least 8-, 16-, 32- and
64-bit signed and unsigned integers. There is a slight chance that
int32 and int64 could end up being the same type (if int < 32 bits
and long >= 64 bits), but I think that is unlikely. And you must be
aware that the unsigned types might have larger sizes than their
names claim.

So there is really no necessity to write non-portable code that makes
non-portable assumptions.
 

Thad Smith

Chris Hills said:
I would think that there is virtually no embedded SW that will
compile on a desktop system... Unix and Linux being the exceptions,
where they are used in both desktop and embedded systems...

The primary reason for me that embedded code does not compile and run
on desktop systems is that the I/O is different. Much embedded
software includes custom drivers for I/O and implements interrupt
handlers. Using fread() and fwrite() is rarely feasible for
single-bit ports, timers, and other special features. These drivers
are often written in C. Of course, the drivers are specific to the
particular target, including the choice of external devices.
Properly partitioned, these details can be contained.

I have written substantial 8-bit 8051 user interface code that I can
debug on a 16-bit DOS system by using different I/O modules and
top-level drivers. It requires avoiding quirky extensions when
feasible and using judicious typedefs and defines.
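
A minimal sketch of that kind of partitioning (the file and function
names here are hypothetical, not Thad's actual code):

/* ui.c -- portable application logic; no direct hardware access */
#include "port_io.h"             /* hypothetical driver interface */

void blink_error(void)
{
    port_led_set(1);             /* driver decides what "LED" means */
    port_delay_ms(250);
    port_led_set(0);
}

/* port_io_host.c -- host build of the driver: simulates the I/O so
   ui.c can be compiled and debugged with a desktop compiler */
#include <stdio.h>
#include "port_io.h"

void port_led_set(int on)       { printf("LED %s\n", on ? "on" : "off"); }
void port_delay_ms(unsigned ms) { (void)ms; /* no real timing on host */ }

A port_io_8051.c defining the same two functions against the real
hardware registers is what links into the target build.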

Short algorithms, such as date computations, are easy to make
portable.

Thad
 

Douglas A. Gwyn

Christian said:
You can always write something like this after including the right
header files: ...

Or #include the right header file in the first place: <stdint.h>.
If you don't have one for your system, free implementations exist.
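
For instance, a sketch of what that buys you on a C99 implementation
(the exact-width names are optional; the least- and fast-width ones
must exist):

#include <stdint.h>

int32_t       a;  /* exactly 32 bits; optional, absent if unsupported */
int_least32_t b;  /* narrowest type of at least 32 bits; always present */
int_fast32_t  c;  /* "fastest" type of at least 32 bits; always present */
uint64_t      d;  /* exactly 64 bits, when the implementation has one */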
 

Paul Eggert

Chris Hills said:
The initial post was stated as a global truth.

I stated merely that platforms with narrower-than-32-bit int "are not
of much interest these days to authors of a wide class of portable
software." That much is undeniable, as I can cite a wide class of
portable software that assumes 32-bit (or larger) int.

Obviously there are still programmers interested in 16-bit-int
processors. My hat's off to them, even if I don't choose to join
their ranks. Conversely, though, it's silly to call a programmer
"dumb" simply because he's not interested in porting his program to
16-bit-int processors.
 

Douglas A. Gwyn

Paul said:
Conversely, though, it's silly to call a programmer
"dumb" simply because he's not interested in porting his program to
16-bit-int processors.

It might not be inappropriate to call the programmer dumb
if the program *could* have been portable, but the programmer
used int where long was called for, saying that he didn't
care about the matter.
 

Tim Woodall

Christian Bau said:
You can always write something like this after including the right
header files:
#if ((ULONG_MAX >> 31) >> 31) >= 3
typedef long int64;
typedef unsigned long uint64;
#else
typedef long long int64;
typedef unsigned long long uint64;
#endif

This will portably define typedefs for at least 8-, 16-, 32- and
64-bit signed and unsigned integers. There is a slight chance that
int32 and int64 could end up being the same type (if int < 32 bits
and long >= 64 bits), but I think that is unlikely. And you must be
aware that the unsigned types might have larger sizes than their
names claim.

So there is really no necessity to write non-portable code that makes
non-portable assumptions.

And how do you extend this to cope with printf and friends?

Tim.
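
One possible answer, sketched here rather than taken from the thread:
C99's <inttypes.h> pairs each <stdint.h> typedef with a format
specifier macro, and the same trick extends to home-grown typedefs by
defining a format macro in the same #if that picks the type.

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int32_t  n   = 100000;
    uint64_t big = 1234567890123456789ULL;

    /* The PRI* macros expand to the right length modifier for
       whatever underlying type each typedef resolves to. */
    printf("n = %" PRId32 ", big = %" PRIu64 "\n", n, big);
    return 0;
}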
 

Brian Inglis

Douglas A. Gwyn said:
Gee, there must be an awful lot of code containing that.

More likely to look like:

#define MINUS32K 0x8000

int a = MINUS32K;

Thanks. Take care, Brian Inglis Calgary, Alberta, Canada
 

Ross Ridge

Ross said:
That's not the only way to make assumptions about twos-complement
arithmetic. A trivial example: having "int a = -32768;" in your code
makes your program not strictly conforming.

Douglas A. Gwyn said:
Gee, there must be an awful lot of code containing that.

As I said, it was just a trivial example, and was sufficient to prove
the statement I made in the first sentence quoted above. I didn't
think you'd be able to see the implications of this example, but
regardless, your sarcasm is misplaced.

Brian Inglis said:
More likely to look like:

#define MINUS32K 0x8000

int a = MINUS32K;

Yes, this would be a more realistic way of writing the example. I've
also seen code like:

#define ERROR_CODE -32768

int foo(void) {
/* ... */
return ERROR_CODE;
}

Ross Ridge
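
The trap behind both examples, sketched (this explanation is not from
the thread): on a 16-bit-int C90 implementation the token 32768 has
type long, and 0x8000 has type unsigned int, so neither -32768 nor
0x8000 gives a plain int constant, and a strictly conforming program
can only rely on int covering -32767..32767 in the first place.
Portable spellings:

#include <limits.h>

/* Strictly conforming: the value fits the guaranteed int range. */
#define ERROR_CODE (-32767)

/* When "the most negative int" really is what is meant: */
#define ERROR_CODE_MIN INT_MIN

/* The idiom many <limits.h> headers use for INT_MIN on
   two's-complement targets, keeping every subexpression in int: */
#define MY_INT_MIN (-32767 - 1)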
 

Ross Ridge

Ross said:
Saying programmers shouldn't do dumb things like assuming 32-bit ints,

Paul Eggert said:
It's not at all dumb to assume that int is at least 32 bits wide.
POSIX 1003.1-2001 (another ISO standard) requires it, as do the GNU
coding standards. Lots of portable C code safely assumes it.

Well, I said "assuming 32-bit ints", not assuming at least 32-bit
ints. Though one of the problems with the latter, less restrictive
assumption is that it often leads people to make the former, more
restrictive assumption. The GNU coding standards inadvertently
resulted in a fair bit of GNU code being written that wasn't portable
to 64-bit CPUs.

Also, I characterized the assumption as dumb largely to concede the
point, because I thought there were much better examples of why the
definition of a strictly conforming program isn't of any practical
use as a coding standard. However, I do think it's still not very
wise to assume int is at least 32 bits wide in code intended to be
portable, because it's not *that* hard to avoid making this
assumption. At least it's not hard for me, and while, as I already
said, it wouldn't be as easy for someone who's never used a 16-bit C
implementation before, I don't think it would be that hard for
anyone. Especially since they need to be considering the fact that
int might be 64 bits wide at the same time.
Paul Eggert said:
No doubt there are still some C platforms with 16-bit int, or other
widths less than 32 bits, but such platforms are not of much interest
these days to authors of a wide class of portable software.

Well, I recently wrote 16-bit C code for a current desktop OS, but
I'll admit that's quite unusual. (And it wasn't portable anyway...)

Ross Ridge
 

Paul Eggert

The specific issue that Doug Gwyn addressed was the use of int
objects to store numbers greater than 2^15. It's a stupid thing to do
when there's a well-known portable alternative -- use long instead.

Not at all. Many APIs use 'int' -- including both the C Standard and
POSIX -- and people aren't going to change these APIs to use 'long'
simply because the C Standard says that 'int' might be 16 bits. If I
use such an interface, I'll use 'int' (obviously). And if I happen to
need to pass the number 100000 to such an interface, I'll do it
without a second thought. This will give me a minor competitive
advantage over the poor programmer who's stuck worrying about
portability to 16-bit hosts.
it is important to have some notion of the tradeoffs between initial
development cost and long-term prospects for reusability through
porting, lest one err toward either too much or too little
portability.

Absolutely. And that's exactly why I don't worry about 16 bit hosts
any more. Other people may have different tradeoffs. That's fine.
I don't call them "dumb" or "stupid".
 

Paul Eggert

Ross Ridge said:
The GNU coding standards inadvertently resulted in a fair bit of GNU
code being written that wasn't portable to 64-bit CPUs.

Really? That's news to me. The GNU coding standards go to some length
to say that code must be portable among a wide variety of system
types; they say that it is "absolutely essential".

Perhaps you're thinking about some of the older BSD code? A lot of
that code indeed assumed that int and long both had to be 32 bits.
But most of that code is long dead (or improved) by now, killed off by
ports to 64-bit hosts like the Alpha in the early 1990s.
 

Douglas A. Gwyn

Paul said:
Not at all. Many APIs use 'int' -- including both the C Standard and
POSIX -- and people aren't going to change these APIs to use 'long'
simply because the C Standard says that 'int' might be 16 bits.

Last time I looked, which was admittedly a long time ago,
POSIX.1 used typedefs in essentially all interfaces. In
fact the switch to typedefs instead of basic types was in
progress back in 7th Edition UNIX days, showing that the
issue was well understood even then. If anybody is
designing APIs using "int" and assuming that that gets
them at least 32-bit width, they really don't know what
they're doing.
 

Paul Eggert

Douglas A. Gwyn said:
Last time I looked, which was admittedly a long time ago,
POSIX.1 used typedefs in essentially all interfaces.

No it doesn't. Here are some sample parts of the POSIX API that use
'int' or 'unsigned int' for integers that can exceed the 16-bit range.

accept
struct aiocb
aio_cancel
aio_suspend
alarm
atoi

And that is just from the parts of the API that begin with 'a'. There
are lots more where those came from.

Exceeding the 16-bit range is not a problem with POSIX 1003.1-2001,
of course, since it requires int to be at least 32 bits wide. As a
trivial example, I can do this in a POSIX 1003.1-2001 application:

alarm (atoi ("86400"))

without having to worry about 16-bit overflow.

Many of the 'int' interfaces (e.g., atoi) are inherited from the C
Standard, but most of them are POSIX-specific.

I understand that 16-bit computers are still an important
embedded-computing niche, but as far as POSIX applications go they
dropped off the face of the earth a loooong time ago. I think I last
used a 16-bit UNIX computer in 1979 or perhaps 1980.
 

Ajoy K Thamattoor

Douglas A. Gwyn said:
Last time I looked, which was admittedly a long time ago,
POSIX.1 used typedefs in essentially all interfaces. In
fact the switch to typedefs instead of basic types was in
progress back in 7th Edition UNIX days, showing that the
issue was well understood even then. If anybody is
designing APIs using "int" and assuming that that gets
them at least 32-bit width, they really don't know what
they're doing.

I think they do. It means they don't care about 16-bit
processors. It is easy to say that code should be made portable
to every esoteric platform which can potentially support a
conforming implementation. In reality, there are costs involved
with such portability.

Ajoy.
 

Douglas A. Gwyn

Ajoy said:
I think they do. It means they don't care about 16-bit
processors. It is easy to say that code should be made portable
to every esoteric platform which can potentially support a
conforming implementation. In reality, there are costs involved
with such portability.

The thing is, we're not talking about esoteric platforms
nor any significant difficulty/cost.
 

Francis Glassborow

Douglas A. Gwyn said:
Last time I looked, which was admittedly a long time ago,
POSIX.1 used typedefs in essentially all interfaces. In
fact the switch to typedefs instead of basic types was in
progress back in 7th Edition UNIX days, showing that the
issue was well understood even then. If anybody is
designing APIs using "int" and assuming that that gets
them at least 32-bit width, they really don't know what
they're doing.

I realise that you have many other concerns which consume your time,
but the above effectively disbars you from expressing opinions on
coding styles explicitly aimed at POSIX systems.

As C Standards 'experts' we may have opinions about how POSIX builds on
C but we should restrain ourselves from casting opprobrium on those who
know how POSIX works and are happy that that is where their code is
targeted.

This is no different from not casting aspersions on the expertise of
programmers writing for the 8051 family who use bit variables.

It is part of an expert's responsibility to understand the
requirements of the job they are doing, and it is not dumb or stupid
to take advantage of specific domain and target-platform knowledge.
 

Dan Pop

An earlier poster said:
Many embedded systems have C implementations with 16-bit int.
Porting code that assumes int has more bits than the standard
guarantees is a pain, and unnecessarily so.

It is the C standard itself that makes porting code to embedded
systems a pain, by not requiring enough *standard* library support
for freestanding implementations. If the code contains a standard
library function call, it is, by definition, non-portable to such
systems/implementations.

Dan
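
A sketch of what that means in practice: C90 requires a freestanding
implementation to provide only <float.h>, <limits.h>, <stdarg.h> and
<stddef.h>, and no library functions at all, so only code along these
lines is guaranteed to port to such systems:

/* Portable even to freestanding implementations: only required
   headers, and no standard library calls. */
#include <limits.h>

long saturating_add(long a, long b)
{
    /* Clamp instead of overflowing, using only <limits.h> macros. */
    if (a > 0 && b > LONG_MAX - a) return LONG_MAX;
    if (a < 0 && b < LONG_MIN - a) return LONG_MIN;
    return a + b;
}

Add a single printf() call and, by the standard's definition, the
file no longer ports to every freestanding implementation.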
 
