Is C99 the final C?


CBFalconer

pete said:
I think you mean "sound the final death knell"

You have pounded home the final nail while watching him writhe in
his final death throes. All of which is finally fine with me.
Sedulously eschew sadistic obfuscation. :)
 

Lorenzo Villari

Arthur J. O'Dwyer said:
Agreed. Although besides the mixing of declarations and statements,
I can't think of any C99 features I *use* that are lacking in C90.

Hmm... in C99 I can do

char symbols [128] = { ['*'] = 1, ['/'] = 2, ['+'] = 3, ['-'] = 4, ... };

or

struct
{
char *expr;
int value;
} lexer ['z' - 'a'] = { ['b' - 'a'] = { "[a-zA-Z]", 0 }, ... };

or

struct flag
{
bool one;
bool other;
};

struct flag initFlag =
{
.one = false,
.other = true
};
(at global scope)

or

FILE *fp;
#define PRINTHIS(...) fprintf (fp, __VA_ARGS__)

or

#define varIF(...) if (__VA_ARGS__) {

or

#define VERIFY(...) { int table[] = { __VA_ARGS__, 1 }; printf("%d",table);

what's the equivalent in C89?
 

Kevin Bracey

In message <[email protected]>
Keith Thompson said:
One potential problem (assume 4-byte ints, normally requiring 4-byte
alignment):

_Packed struct { /* or whatever syntax you like */
char c; /* offset 0, size 1 */
int i; /* offset 1, size 4 */
} packed_obj;

You can't sensibly take the address of packed_obj.i. A function that
takes an "int*" argument will likely die if you give it a misaligned
pointer (unless you want to allow _Packed as an attribute for function
arguments). The simplest approach would be to forbid taking the
address of a member of a packed structure (think of the members as fat
bit fields).

The simplest way is to also treat _Packed as a type qualifier, much like
const.

&packed_obj.i would be of type _Packed int *; it wouldn't be possible
to assign it to an int *. For CPUs which don't allow unaligned access, the
concept of a _Packed (ie unaligned) pointer is useful, probably more so than
unaligned structures, as it allows the C programmer to easily read an
unaligned value from any raw, unaligned data.

That's the way the _Packed implementation I've seen works.
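
For reference, reading a misaligned value out of raw bytes can already be
done in portable C with memcpy; a minimal sketch (the function name and
the offset-1 layout are mine, for illustration):

#include <string.h>

/* Fetch an int sitting at byte offset 1 of a raw buffer (i.e. just past
   a leading char) without ever forming a misaligned int pointer. */
int read_unaligned_int(const unsigned char *raw)
{
    int value;
    memcpy(&value, raw + 1, sizeof value); /* byte-wise copy, no trap */
    return value;
}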

The one wrinkle is that the _Packed-ness has to attach to the struct tag in
that declaration, to ensure struct type compatibility. Not an issue in your
example, but it is if the struct was named. Also all sub-structures of
_Packed structures must themselves be _Packed.

I would be in favour of standardising _Packed. Even if you didn't totally
standardise _Packed structure layout, the standardisation of the actual
syntax and type rules would be worthwhile.
 

Kevin Bracey

I'm curious - why do you think that? I don't know that you're wrong,
but I can't think of any reason why it would be a significant cost.

Nah, it's dead cheap. Only about 15 lines in my C++ compiler. You just need
an extra field in your "Variable" objects containing the "known" constant
value from the initialiser, if any, and you substitute that in every time the
variable's value is referenced.

It would be just as easy to do this as an optimisation for a C compiler as
well, if it weren't for the requirement to still flag up the non-constant
constraint violations. Also, tentative definitions slightly confuse things,
in that a const int may become known later.
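
A minimal illustration of that tentative-definition wrinkle, assuming a
single translation unit (identifiers are mine):

const int n;                  /* tentative definition: value unknown yet */
int get_n(void) { return n; } /* a one-pass compiler can't fold n here   */
const int n = 42;             /* the initialiser only appears now        */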
 

Arthur J. O'Dwyer

Hmm... in C99 I can do

[and then at the end asks]
what's the equivalent in C89?
char symbols [128] = { ['*'] = 1, ['/'] = 2, ['+'] = 3, ['-'] = 4, ... };

This is reasonable; it's just not something I do a lot.

char symbols[128] = {0};
symbols['*'] = 1;
symbols['/'] = 2;
symbols['+'] = 3;
...
or
char ops[] = {'*', '/', '+', '-', ...};
char symbols[128] = {0};
int i;
for (i=0; i < sizeof ops; ++i)
symbols[ops[i]] = i+1;

or

struct
{
char *expr;
int value;
} lexer ['z' - 'a'] = { ['b' - 'a'] = { "[a-zA-Z]", 0 }, ... };

No reasonable (portable) C equivalent that I can think of.
I'm not even sure what some of those expressions are *meant*
to do -- aren't you making the implicit assumption that ('b'-'a')
is equal to 1? So why not write 1? Why not write 25 instead of
('z' - 'a'), and save yourself the whole ASCII assumption?
Once you've gotten that far, it's trivial to re-write the code
in either the way I wrote above, or as a C89-style initializer
(simply drop the array indices).
or

struct flag
{
bool one;
bool other;
};

struct flag
{
int one;
int other;
};

which I tend to do, even in C++, which has had 'bool' longer than
C, I think (although I do write methods returning 'bool' when it
makes sense -- I guess the 'int' thing is just a habit at this point).
struct flag initFlag =
{
.one = false,
.other = true
};
(at global scope)

struct flag initFlag = { 0, 1 };

:) I see how the C99 way is better, yes.

FILE *fp;
#define PRINTHIS(...) fprintf (fp, __VA_ARGS__)

#include <stdio.h>
#include <stdarg.h>
FILE *fp;
void PRINTHIS(const char *s, ...)
{
va_list ap;
va_start(ap, s);
vfprintf(fp, s, ap);
va_end(ap);
}

The "C89 way" even does some typechecking on the first
parameter! :)

#define varIF(...) if (__VA_ARGS__) {

#define varIF(x) if (x) {
or
#define varIF(parenthesized) if parenthesized {
or
#undef varIF /* why use such a thing in the first place? */

#define VERIFY(...) { int table[] = { __VA_ARGS__, 1 }; printf("%d",table);

Huh? Here, there's a mismatch in the format specifier for
printf(); the variadic portion of the macro arguments can only
take on 'int' values; called with no arguments, the macro will
not even compile; the macro is not brace-balanced (missing one
closing } brace); and I have no idea what its point is.
But certainly it can't be compiled with a C89 compiler, either!

my $.02,
-Arthur
 

Lorenzo Villari

Arthur J. O'Dwyer said:
struct flag
{
int one;
int other;
};

which I tend to do, even in C++, which has had 'bool' longer than
C, I think (although I do write methods returning 'bool' when it
makes sense -- I guess the 'int' thing is just a habit at this point).


struct flag initFlag = { 0, 1 };

:) I see how the C99 way is better, yes.

Ok... but what if

struct problem
{
bool one;
bool other;
int someother;
char anyway[80];
float thank;
void *you;
double very_much;
};

struct problem initFlag =
{
.thank = 5.4,
.very_much = 4.5
};

and initFlag is still global...

I guess this should be something like

struct problem initFlag = { 0, 0, 0, {0}, 5.4, 0, 4.5 };

but I think the C99 syntax is clearer...
#define VERIFY(...) { int table[] = { __VA_ARGS__, 1 };
printf("%d",table);

Huh? Here, there's a mismatch in the format specifier for
printf(); the variadic portion of the macro arguments can only
take on 'int' values; called with no arguments, the macro will
not even compile; the macro is not brace-balanced (missing one
closing } brace); and I have no idea what its point is.
But certainly it can't be compiled with a C89 compiler, either!

In fact the missing brace is a typo...

Thank you for your explanations ^^
 

Paul Hsieh

Paul said:
Sidney Cadot said:
[...] I for one would be happy if more compilers would
fully start to support C99, It will be a good day when I can actually
start to use many of the new features without having to worry about
portability too much, as is the current situation.
I don't think that day will ever come. In its totality C99 is almost
completely worthless in real world environments. Vendors will be
smart to pick up restrict and a few of the goodies in C99 and just stop
there.

Want to take a bet...?

Sure. Vendors are waiting to see what the C++ people do, because they are well
aware of the irreconcilable conflicts that have arisen. Bjarne and crew are
going to be forced to take the new stuff from C99 in the bits and pieces that
don't cause any conflict or aren't otherwise stupid for other reasons. The
vendors
are going to look at this and decide that the subset of C99 that the C++ people
chose will be the least problematic solution and just go with that.
If instead, the preprocessor were a lot more functional, then you
could simply extract packed offsets from a list of declarations and
literally plug them in as offsets into a char[] and do the slow memcpy
operations yourself.

This would violate the division between preprocessor and compiler too
much (the preprocessor would have to understand quite a lot of C semantics).

No, that's not what I am proposing. I am saying that you should not use
structs at all, but you can use the contents of them as a list of comma
seperated entries. With a more beefed up preprocessor one could find the
offset of a packed char array that corresponds to the nth element of the list
as a sum of sizeof()'s and you'd be off to the races. So I am not proposing
that the preprocessor know anything more about the C language at all. I am
instead proposing that it be better at what it *does* know about -- numbers,
macros, and various C-language compatible tokens.
That doesn't seem possible. The amount of "stack" that an
implementation might use for a given function is clearly not easy to
define. Better to just leave this loose.

It's not easy to define, that's for sure. But to call into recollection
a post from six weeks ago: [...] ...This is legal C (as per the Standard),
but it overflows the stack on any implementation (which is usually a
symptom of UB). Why is there no statement in the standard that even so much
as hints at this?

isgraph(-1) is also legal C -- *SYNTACTICALLY*. There is no end of problems
with the C programming environment. To gripe about runtime stack depth
limitations alone I think is kind of pointless. C is a language suitable for
and highly encouraging of writing extremely unsound and poor code. Fixing it
would require a major overhaul of the language and library.
But this is perhaps territory that the Standard should steer clear of,
more like something a well-written and dedicated third-party library
could provide.

But a third party library can't do this portably. It's actually useful
functionality that you just can't get from the C language, and there's no way
to reliably map such functionality to the C language itself. One is forced to
know the details of the underlying platform to implement such things. It's
something that really *should* be in the language.
I'd rather see this as filling in a gaping hole.


Because that's difficult to get right (unlike a proposed binary output
form).

There are sources for snprintf available that can do it. You are asking for
this feature because you think it would be useful *FOR YOU*. I convert hex to
binary in my head barely thinking about it and would rather use the screen
space for more pertinent things, so it would not be useful for me. My proposal
allows the programmer to decide what is or is not useful to them.
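
(That said, a binary dump is easy enough to roll yourself in plain C; a
minimal sketch, helper name mine:)

#include <stdio.h>
#include <limits.h>

/* Print the bits of v, most significant first. */
void print_binary(unsigned int v)
{
    int i;
    for (i = (int)(sizeof v * CHAR_BIT) - 1; i >= 0; --i)
        putchar(((v >> i) & 1u) ? '1' : '0');
}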
The %x format specifier mechanism is perhaps not a good way to do this,
if only because it would only allow something like 15 extra output formats.

I'm not sure what you are saying here. You all of a sudden don't like the hex
printing format? And why is having more, user definable print formats a bad
thing?
The problem is that real string handling requires memory handling.
The other primitive types in C are flat structures that are fixed
width. You either need something like C++'s constructor/destructor
semantics or automatic garbage collection otherwise you're going to
have some trouble with memory leaking.

A very simple reference-counting implementation would suffice. [...]

This would complexify the compiler to no end. It's also hard to account for a
reference that was arrived at via something like "memcpy".
I don't think it is a silly idea to have some consideration for
worst-case performance in the standard, especially for algorithmic
functions (of which qsort and bsearch are the most prominent examples).

Perhaps you misunderstand me. The fact the C committee *DIDN'T* do this is an
abomination. STL includes some kind of sorting mechanisms which are now
guaranteed to be O(n*log(n)) because of the existence of an algorithm called
"INTROSORT" (which is really just a quicksort that aborts when it realizes its
going too slow, and switches to heapsort -- but the authors think its clever
because they do this determiniation recursively.)
Why is it any more esoteric than having a comma operator?

I didn't say it was. I've never used the comma operator outside of an occasional
extra expression at the end of the increment statement in a for loop in my life.
I consider comma to be esoteric as well.
I'll provide three reasons.

1) because it is something completely different

Yeah, it's a superset that has been embraced by the C++ community.
2) because it is quite unrelated (I don't get the 'instead')

I'm saying that you could have &&&, |||, but just not define what they
actually do. Require that the programmer define what they do. C doesn't have
type-specific functions, and if one were to add in operator overloading in a
consistent way, then that would mean that an operator overload would have to
accept only its defined type. For this to be useful without losing the
operators that already exist in C, the right answer is to *ADD* operators. In
fact I would suggest that one simply define a grammar for such operators, and
allow *ALL* such operators to be definable.
3) because operator overloading is mostly a bad idea, IMHO

Well, Bjarne Stroustrup has made a recent impassioned request to *REMOVE*
features from C++. I highly doubt that removing operator overloading is a
request that has been made or would be taken seriously. I.e., I don't think a credible
population of people who have been exposed to it would consider it a bad idea.
... I'd like to see them. &&& is a bit silly (it's fully equivalent to
"a ? b : 0") but ||| (or ?: in gcc) is actually quite useful.

But there are no end of little cheesy operators that one could add. For
example, a <> b to swap a and b, a <<< b to rotate a by b bits, @ a to find the
highest bit of a, etc., etc., etc. All of these are good, in some cases. And
I think that there would be no end to the number of useful operators that one
might like to add to a program. I think your proposal is DOA because you
cannot make a credible case as to why your operator in particular has any value
over any number of other operators that you might like to add.
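
(For reference, the <<< rotate at least is expressible in today's C, just
not as tidily; a minimal sketch assuming 32-bit values:)

#include <stdint.h>

/* Rotate x left by n bits; masking n keeps both shifts well defined. */
uint32_t rotl32(uint32_t x, unsigned n)
{
    n &= 31;
    return (x << n) | (x >> ((32 - n) & 31));
}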

Adding operator overloading, however, would be a real extension and would in a
sense address *all* these issues.
It's more a strain on the brain to me, why there are coupled
assignment/operators for nigh all binary operators, but not for this
unary one.

Ok, but then again this is just a particular thing with you.
Now I would ask you: which existing operator would you like to overload
for, say, integers, to mean "min" and "max" ?

How about a <==> b for max and a >==< b for min? I personally don't care that
much.
...It already is available in C, given a good-enough compiler. Look at
the code gcc spits out when you do:

unsigned long a = rand();
unsigned long b = rand();

unsigned long long c = (unsigned long long)a * b;

Yes I'm sure the same trick works for chars and shorts. So how do you widen a
long long multiply?!?!? What compiler trick are you going to hope for to
capture this? What you show here is just some trivial *SMALL* multiply, that
relies on the whims of the optimizer.

PowerPC, Alpha, Itanium, UltraSPARC and AMD64 all have widening multiplies that
take two 64 bit operands and return a 128 bit result in a pair of 64 bit
operands. They all invest a *LOT* of transistors to do this *ONE* operation.
They all *KNOW* you can't finagle any C/C++ compiler to produce the operation,
yet they still do it -- it's *THAT* important (hint: SSL, and therefore *ALL* of
e-commerce, uses it.)
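
(To see what the single instruction replaces, here is the 64x64 -> 128
multiply done in portable C99 from 32-bit halves -- a minimal sketch, names
mine: four multiplies plus carry bookkeeping instead of one op.)

#include <stdint.h>

/* Multiply a and b, producing the full 128-bit product in *hi:*lo. */
void mul64x64(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo)
{
    uint64_t a_lo = a & 0xFFFFFFFFu, a_hi = a >> 32;
    uint64_t b_lo = b & 0xFFFFFFFFu, b_hi = b >> 32;

    uint64_t p0   = a_lo * b_lo;
    uint64_t mid  = a_lo * b_hi + (p0 >> 32);          /* cannot overflow */
    uint64_t mid2 = a_hi * b_lo + (mid & 0xFFFFFFFFu); /* cannot overflow */

    *lo = (mid2 << 32) | (p0 & 0xFFFFFFFFu);
    *hi = a_hi * b_hi + (mid >> 32) + (mid2 >> 32);
}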
Many languages exist where this is possible, they are called
"assembly". There is no way that you could come up with a well-defined
semantics for this.

carry +< var = a + b;
Did you know that a PowerPC processor doesn't have a shift-right where
you can capture the carry bit in one instruction? Silly but no less true.

What has this got to do with anything? Capturing carries coming out of shifts
doesn't show up in any significant algorithms that I am aware of that are
significantly faster than using what we have already. The specific operations
I am citing make a *HUGE* difference and have billion dollar price tags
associated with them.

I understand the need for the C language standard to be applicable to as many
platforms as possible. But unlike some right shift detail that you are talking
about, the widening multiply hardware actually *IS* deployed everywhere.
 

Sidney Cadot

Paul said:
Paul said:
[...] I for one would be happy if more compilers would
fully start to support C99, It will be a good day when I can actually
start to use many of the new features without having to worry about
portability too much, as is the current situation.
I don't think that day will ever come. In its totality C99 is almost
completely worthless in real world environments. Vendors will be
smart to pick up restrict and a few of the goodies in C99 and just stop
there.

Want to take a bet...?


Sure. Vendors are waiting to see what the C++ people do, because they are well
aware of the irreconcilable conflicts that have arisen. Bjarne and crew are
going to be forced to take the new stuff from C99 in the bits and pieces that
don't cause any conflict or aren't otherwise stupid for other reasons. The
vendors
are going to look at this and decide that the subset of C99 that the C++ people
chose will be the least problematic solution and just go with that.

Ok. I'll give you 10:1 odds; there will be a (near-perfect) C99 compiler
by the end of this decade.
If instead, the preprocessor were a lot more functional, then you
could simply extract packed offsets from a list of declarations and
literally plug them in as offsets into a char[] and do the slow memcpy
operations yourself.

This would violate the division between preprocessor and compiler too
much (the preprocessor would have to understand quite a lot of C semantics).


No, that's not what I am proposing. I am saying that you should not use
structs at all, but you can use the contents of them as a list of
comma-separated entries. With a more beefed-up preprocessor one could find the
offset of a packed char array that corresponds to the nth element of the list
as a sum of sizeof()'s and you'd be off to the races.

Perhaps I'm missing something here, but wouldn't it be easier to use the
offsetof() macro?
So I am not proposing
that the preprocessor know anything more about the C language at all. I am
instead proposing that it be better at what it *does* know about -- numbers,
macros, and various C-language compatible tokens.

Ok. It may make sense to extend the preprocessor.
It's not easy to define, that's for sure. But to call into recollection
a post from six weeks ago: [...] ...This is legal C (as per the Standard),
but it overflows the stack on any implementation (which is usually a
symptom of UB). Why is there no statement in the standard that even so much
as hints at this?
isgraph(-1) is also legal C -- *SYNTACTICALLY*. There is no end of problems
with the C programming environment. To gripe about runtime stack depth
limitations alone I think is kind of pointless.

Well, I showed a perfectly legal C program that should happily run and
terminate if I am to believe the standard, yet it doesn't do that on any
architecture. Excuse me for being a bit unhappy about that.
C is a language suitable for
and highly encouraging of writing extremely unsound and poor code. Fixing it
would require a major overhaul of the language and library.

That's true. I don't quite see how this relates to the preceding
statement though.
But a third party library can't do this portably.

I don't see why not?
It's actually useful
functionality that you just can't get from the C language, and there's no way
to reliably map such functionality to the C language itself. One is forced to
know the details of the underlying platform to implement such things. It's
something that really *should* be in the language.

Well, it looks to me you're proposing to have a feature-rich heap
manager. I honestly don't see why this couldn't be implemented portably
without platform-specific knowledge. Could you elaborate?
There are sources for snprintf available that can do it. You are asking for
this feature because you think it would be useful *FOR YOU*.

Yes, and for a bunch of other people.
I convert hex to binary in my head barely thinking about it and would rather
use the screen space for more pertinent things, so it would not be
useful for me.

I can do it blindfolded, with my hands tied behind my back, at an
ambient noise level that makes most people lose bladder control. I
wouldn't use it all the time mind you, but it's just plain silly to be
able to do hex, octal, and decimal, but not binary. I want this more for
reasons of orthogonality in design than anything else.
My proposal allows the programmer to decide what is or is not useful to them.

I'm all for that.

I don't think it's too bad an idea (although I have never gotten round
to trying the mechanism gcc provides for this). In any case, this kind
of thing is so much more naturally done in an OOP-supporting language
like C++. Without being belligerent: why not use that if you want this
kind of thing?
I'm not sure what you are saying here. You all of a sudden don't like the hex
printing format? And why is having more, user definable print formats a bad
thing?

I used "%x" as an example of a format specifier that isn't defined ('x'
being a placeholder for any letter that hasn't been taken by the
standard). The statement is that there'd be only about 15 letters left
for this kind of thing (including 'x' by the way -- it's not a hex
specifier). Sorry for the confusion, I should've been clearer.
* I think I would like to see a real string-type as a first-class
citizen in C, implemented as a native type. But this would open
up too big a can of worms, I am afraid, and a good case can be
made that this violates the principles of C too much (being a
low-level language and all).

The problem is that real string handling requires memory handling.
The other primitive types in C are flat structures that are fixed
width. You either need something like C++'s constructor/destructor
semantics or automatic garbage collection otherwise you're going to
have some trouble with memory leaking.

A very simple reference-counting implementation would suffice. [...]

This would complexify the compiler to no end. It's also hard to account for a
reference that was arrived at via something like "memcpy".

A first-class citizen string wouldn't be a pointer; neither would you
necessarily be able to get its address (although you should be able to
get the address of the characters it contains).
Perhaps you misunderstand me. The fact the C committee *DIDN'T* do this is an
abomination. STL includes some kind of sorting mechanisms which are now
guaranteed to be O(n*log(n)) because of the existence of an algorithm called
"INTROSORT" (which is really just a quicksort that aborts when it realizes its
going too slow, and switches to heapsort -- but the authors think its clever
because they do this determiniation recursively.)

Sorry about that, I thought you were sarcastic. Ok, then we agree on
this. Moving on...
I didn't say it was. I've never used the comma operator outside of an occasional
extra expression at the end of the increment statement in a for loop in my life.
I consider comma to be esoteric as well.

Ok, that's a valid opinion that I don't happen to share.
Yeah, it's a superset that has been embraced by the C++ community.

It's a superset only if the C language had a ||| or &&& operator
in the first place. Which (much to my dismay) it doesn't.
I'm saying that you could have &&&, |||, but just not define what they
actually do. Require that the programmer define what they do. C doesn't have
type-specific functions, and if one were to add in operator overloading in a
consistent way, then that would mean that an operator overload would have to
accept only its defined type.

Ok, so the language should have a big bunch of operators, ready for the
taking. Incidentally, Mathematica supports this, if you want it badly.
For this to be useful without losing the
operators that already exist in C, the right answer is to *ADD* operators. In
fact I would suggest that one simply define a grammar for such operators, and
allow *ALL* such operators to be definable.

This seems to me a bad idea for a multitude of reasons. First, it would
complicate most stages of the compiler considerably. Second, a
maintenance nightmare ensues: while the standard operators of C are
basically burnt into my soul, I'd have to get used to the Fantasy
Operator Of The Month every time I take on a new project, originally
programmed by someone else.

There's a good reason that we use things like '+' and '*' pervasively,
in many situations; they are short, and easily absorbed in many
contexts. Self-defined operator tokens (consisting, of course, of
'atomic' operators like '+', '=', '<' ...) will lead to unreadable code,
I think; perhaps something akin to a complicated 'sed' script.
Well, Bjarne Stroustrup has made a recent impassioned request to *REMOVE*
features from C++.

Do you have a reference? That's bound to be a fun read, and he probably
missed a few candidates.
I highly doubt that removing operator overloading is a request that has
been made or would be taken seriously. I.e., I don't think a credible
population of people who have been exposed to it would consider it a bad idea.

I can only speak for myself; I have been exposed, and think it's a bad
idea. When used very sparsely, it has its uses. However, introducing
new user-definable operators as you propose would be folly; the only way
operator overloading works in practice is if you maintain some sort of
link to the intuitive meaning of an operator. User defined operators
lack this by definition.
But there are no end of little cheesy operators that one could add. For
example, a <> b to swap a and b, a <<< b to rotate a by b bits, @ a to find the
highest bit of a, etc., etc., etc.

"<>" would be a bad choice, since it is easy to confuse for "not equal
to". I've programmed a bit in IDL for a while, which has my dear "min"
and "max" operators.... It's a pity they are denoted "<" and ">",
leading to heaps of misery by confusion.

<<< and @ are nice though. I would be almost in favour of adding them,
were it not for the fact that this would drive C dangerously close in
the direction of APL.
All of these are good, in some cases. And
I think that there would be no end to the number of useful operators that one
might like to add to a program. I think your proposal is DOA because you
cannot make a credible case as to why your operator in particular has any value
over any number of other operators that you might like to add.
Adding operator overloading, however, would be a real extension and would in a
sense address *all* these issues.

Again I wonder, seriously: wouldn't you be better off using C++?
Ok, but then again this is just a particular thing with you.

Guilty as charged.

Sure, but you're talking about something that goes a lot further than
run-of-the-mill operator overloading. I think the simple way would be
to just introduce these min and max operators and be done with it.

"min" and "max" are perhaps less important than "+" and "*", but they
are probably the most-used operations that are not available right now
as operators. If we are going to extend C with new operators, they would
be the most natural choice I think.
How about a <==> b for max and a >==< b for min? I personally don't care that
much.

Those are not existing operators, as you know. They would have to be
defined in your curious "operator definition" scheme.

I find the idea freaky, yet interesting. I think C is not the place for
this (really, it would be too easy to compete in the IOCCC) but perhaps
in another language... Just to follow your argument for a bit, what
would an "operator definition" declaration look like for, say, the "?<"
min operator in your hypothetical extended C?
Yes I'm sure the same trick works for chars and shorts. So how do you widen a
long long multiply?!?!? What compiler trick are you going to hope for to
capture this? What you show here is just some trivial *SMALL* multiply, that
relies on the whims of the optimizer.

Well, I'd show you, but it's impossible _in principle_. Given that you
are multiplying two expressions of the widest type supported by your
compiler, where would it store the result?
PowerPC, Alpha, Itanium, UltraSPARC and AMD64 all have widening multiplies that
take two 64 bit operands and return a 128 bit result in a pair of 64 bit
operands. They all invest a *LOT* of transistors to do this *ONE* operation.
They all *KNOW* you can't finagle any C/C++ compiler to produce the operation,
yet they still do it -- it's *THAT* important (hint: SSL, and therefore *ALL* of
e-commerce, uses it.)

Well, I don't know if these dozen-or-so big-number 'powermod' operations
that are needed to establish an SSL connection are such a big deal as
you make it.
carry +< var = a + b;

It looks cute, I'll give you that. Could you please provide semantics?
It may be a lot less self evident than you think.
What has this got to do with anything? Capturing carries coming out of shifts
doesn't show up in any significant algorithms that I am aware of
that are significantly faster than using what we have already.

Ah, I see you've never implemented a non-table-driven CRC or a binary
greatest common divisor algorithm. They are both hard at work when you
establish an SSL connection.
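
(For the curious, a minimal binary GCD sketch -- note that every inner
step is a low-bit test followed by a shift, which is exactly where a
carry-capturing shift would pay off:)

/* Binary GCD (Stein's algorithm) of u and v. */
unsigned long binary_gcd(unsigned long u, unsigned long v)
{
    int shift = 0;
    if (u == 0) return v;
    if (v == 0) return u;
    while (((u | v) & 1ul) == 0) { u >>= 1; v >>= 1; ++shift; }
    while ((u & 1ul) == 0) u >>= 1;
    while (v != 0) {
        while ((v & 1ul) == 0) v >>= 1;
        if (u > v) { unsigned long t = u; u = v; v = t; }
        v -= u;
    }
    return u << shift;
}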
The specific operations I am citing make a *HUGE* difference and have billion
dollar price tags associated with them.

These numbers you made up from thin air, no? Otherwise, I'd welcome a
reference.
I understand the need for the C language standard to be applicable to as many
platforms as possible. But unlike some right shift detail that you are talking
about, the widening multiply hardware actually *IS* deployed everywhere.

Sure is. Several good big-number libraries are available that have
processor-dependent machine code to do just this.

Best regards,

Sidney
 

Simon Biber

Lorenzo Villari said:
struct
{
char *expr;
int value;
} lexer ['z' - 'a'] = { ['b' - 'a'] = { "[a-zA-Z]", 0 }, ... };

This is unportable. The only characters that can be reliably
subtracted in C are the digits. '0' through '9' are guaranteed
to be consecutive. Letters need not be consecutive, and need
not even be in alphabetical order.
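
A portable alternative is to search a string of letters rather than
subtract character codes; a minimal C90 sketch (helper name is mine):

#include <string.h>

/* Map 'a'..'z' to 0..25 without assuming consecutive letter codes. */
int letter_index(char c)
{
    static const char letters[] = "abcdefghijklmnopqrstuvwxyz";
    const char *p;
    if (c == '\0')
        return -1;          /* strchr would find the terminator */
    p = strchr(letters, c);
    return p ? (int)(p - letters) : -1;
}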
 

Dan Pop

Sidney Cadot said:
Ok. I'll give you 10:1 odds; there will be a (near-perfect) C99 compiler
by the end of this decade.

By then, C99 is supposed to be obsoleted by C0x ;-)

Dan
 

P.J. Plauger

Dan Pop said:
By then, C99 is supposed to be obsoleted by C0x ;-)

Not sure how to read this, even with the emoticon. The C committee has
agreed to *reaffirm* the C Standard for the next several years, rather
than begin work on a major revision. I'd say that the odds of there
ever being an official C0x to replace C99 are pretty small.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

Dan Pop

P.J. Plauger said:
Not sure how to read this, even with the emoticon. The C committee has
agreed to *reaffirm* the C Standard for the next several years, rather
than begin work on a major revision. I'd say that the odds of there
ever bing an official C0x to replace C99 are pretty small.

In comp.std.c committee members keep mentioning C0x as the next C
standard.

Dan
 

Paul Hsieh

* support for a "packed" attribute to structs, guaranteeing that no
padding occurs.

Indeed, this is something I use on the x86 all the time. The problem
is that on platforms like UltraSparc or Alpha, this will inevitably lead
either to BUS errors or to extremely slow-performing code.

If instead, the preprocessor were a lot more functional, then you
could simply extract packed offsets from a list of declarations and
literally plug them in as offsets into a char[] and do the slow memcpy
operations yourself.

Obviously an implementation of packed structures is useless if it
leads to bus errors.

There's ample precedent in other languages (Pascal and Ada at least)
for packed structures. [...] You can't sensibly take the address of
packed_obj.i. A function that takes an "int*" argument will likely die if
you give it a misaligned pointer (unless you want to allow _Packed as an
attribute for function arguments). The simplest approach would be to forbid
taking the address of a member of a packed structure (think of the members
as fat bit fields). [...]

Then what would be the point of even calling it a "struct"? This is what I am
saying -- it leads to bus errors because of the rest of the language concepts
like taking the address of any value that is stored in a memory location.
Another possibility (ugly but perhaps useful) is to make the address of a
member of a packed field yield a void*.

No -- the problem is with the BUS error itself. The C language doesn't need
*EVEN MORE* ways of creating UB with otherwise acceptable syntax. This is more
than just ugly; it's very, very anti-intuitive.

The right answer is to give such pointers a special attribute, like,
"_Unaligned" (or simply reuse "_Packed".) The compiler would then enforce type
safety in the following way: A non-"_Unaligned" pointer may not accept the
value of an "_Unaligned" pointer, but the other way around is not true.
Certain functions like memcpy and memmove would then be declared with these
"_Unaligned" decorators. But programmers could go ahead and use the decorator
themselves so that unaligned accesses could be propagated arbitrarily down a
call stack in a well-defined and programmer controlled way. This would
precisely encapsulate what the programmer is trying to do without allowing the
compiler to produce unexpected BUS errors. Attempts to address an unaligned
pointer will be caught at compile time -- the perfect solution.
I don't think enums can be repaired without breaking tons of existing
code. And they are useful as currently defined for defining names for
a number of distinct integer values. If you want Pascal-like
enumeration types, you'd need a new construct -- but I think having
two distinct kinds of enumeration types would be too ugly for new
users.

How about another decorator? Like: enum _Strict ____ {...}; ? Basically the
language would not auto-convert such enums to ints at all without an explicit
cast. Once again, under programmer control.
That doesn't seem possible. The amount of "stack" that an
implementation might use for a given function is clearly not easy to
define. Better to just leave this loose.

Agreed. The limit on call depth is typically determined by the amount
of available memory, something a compiler implementer can't say much
about. You could sensibly add a call depth clause to the Translation
Limits section (C99 5.2.4.1); that would require the implementation to
handle at least one program with a call depth of N, but wouldn't really
guarantee anything in general.

Well the problem with this is that then the *LINKER* would have to be
augmented to analyze the max relevant stack size of all functions in an object
and then assign the final stack according to a formula (N * maxstacksz) that
makes this work. It also kind of makes use of alloca impossible.
[...]
Ah -- the kludge request. Rather than adding format specifiers one at
a time, why not instead add in a way of being able to plug in
programmer-defined format specifiers? I think people in general would
like to use printf for printing out more than just the base types in a
collection of just a few formats defined at the whims of some 70s UNIX
hackers. Why not be able to print out your data structures, or
relevant parts of them as you see fit?

Well, you can do that with the "%s" specifier, as long as you've
defined a function that returns an image string for a value of your
type (with all the complications of functions returning dynamic
strings).

Who is going to free the memory allocated for this string? If it's static, then
what happens when you try to printf two such items -- or just try to use it in
a multitasking environment in general?
Most languages that provide operator overloading restrict it to
existing operator symbols.

Yeah well most languages have real string primitives and built-in array range
checking too. Somehow I don't think what has been done in *other languages*
has any serious bearing on what should be done in C. To reiterate my proposal:
A whole *GRAMMAR* of symbols for operators could be added, all of which have no
default definition, but which *can be* defined by the programmer, with
semantics similar to C's function declaration.
[...] If you want "min" and "max" for int, there
aren't any spare operator symbols you can use. If you want to allow
overloading for arbitrary symbols (which some languages do), you'll
need to decide how and whether the user can define precedence for the
new operators.

Good point, but something as simple as "lowest precedence" and increasing in
the order in which they are declared seems fine enough. Or maybe inverted --
just play with those combinations to see what makes sense in practice. If
that's not good enough, then make the precedence level relative to another
operator at the time of declaration. For example:

int _Operator ?< after + (int x, int y) { /* max */
if (x > y) return x;
return y;
}

int _Operator ?> same ?< (int x, int y) { /* min */
if (x < y) return x;
return y;
}
[...]
Right -- this would just be making C into C++. Why not instead
dramatically improve the functionality of the preprocessor so that the
macro-like cobblings we put together in place of templates are
actually good for something? I've posted elsewhere about this, so I
won't go into details.

Hmm. I'm not sure that making the preprocessor *more* powerful is
such a good idea. It's too easy to abuse as it is [...]

A *LOT* of C is easy to abuse. If you're worried about a programmer you
work with abusing the preprocessor, then that's an issue between you and
that programmer.
If you can improve the preprocessor without making it even more
dangerous, that's great. (I don't think I've see your proposal.)

My proposal is to add preprocessor-only scoped variables:

#define $c 1

The idea is that "$c" could never show up as such a symbol in the C source
after preprocessing is done. And in cases where such a $___ variable has not
been defined you could insert an instance specific generated variable such as:

$c -> __PREPROCINST_MD5_09839fe8d98798fe8978de98799cfe01_c

so as to kind of put it into its own kind of "name-space" that is not really
"accessible" to the programmer. Where the MD5 obfuscation would come from a
source like: <filename><date,time><MD5(source)><the $varname> in an effort to
probabilistically avoid collisions across files (trust me, this is not as much
voodoo as you might think) in case some bozo turns this into a global
declaration.

The purpose is to allow for even more useful things like:

#for $c in #range(0,5)
printf ("Even: %d Odd: %d\n", 2*$c, 2*$c+1);
#endfor

/* #range(0,5) just expands to 0,1,2,3,4 and the #for loop works, kind of
python-like, just as you would expect. */

#define genStruct(name,#VARARGS) struct tag##name { #VARARGS };\
#for $c in #VARARGS
# define offsetof_##$c offsetof (tag##name, $c)
#endfor

/* Here the "\" is required to attach the #for, which then itself
has an implicit multi-line characteristic, so that the lines
up until the #endfor are sucked into the #define genStruct.
Also, #define's executed inside of a #for are repeatedly
executed for each iteration */

#define swap(type,x,y) { \
type $tmp = x; \
x = y; \
y = $tmp; \
}

/* In this case, without prior definition, $tmp is given an
obfuscated name by the time it reaches the C source code. */
 

Keith Thompson

Sidney Cadot said:
Paul Hsieh wrote: [...]
I used "%x" as an example of a format specifier that isn't defined ('x'
being a placeholder for any letter that hasn't been taken by the
standard). The statement is that there'd be only about 15 letters left
for this kind of thing (including 'x' by the way -- it's not a hex
specifier). Sorry for the confusion, I should've been clearer.

What do you mean when you say that "%x" is not a hex specifier?
That's either confusing or wrong.

printf("foo = %x\n", foo);

[...]
Well, I'd show you, but it's impossible _in principle_. Given that you
are multiplying two expressions of the widest type supported by your
compiler, where would it store the result?

In a struct, or in an array of two intmax_t's or uintmax_t's.
(Wasn't that a common trick in primordial C, before the introduction
of "long"?)
 

Keith Thompson

There's ample precedent in other languages (Pascal and Ada at
least) for packed structures. [...] You can't sensibly take the
address of packed_obj.i. A function that takes an "int*" argument
will likely die if you give it a misaligned pointer (unless you
want to allow _Packed as an attribute for function arguments).
The simplest approach would be to forbid taking the address of a
member of a packed structure (think of the members as fat bit
fields). [...]

Then what would be the point of even calling it a "struct"? This is
what I am saying -- it leads to bus errors because of the rest of
the language concepts like taking the address of any value that is
stored in a memory location.

Surely it's no worse than calling a struct with bit fields a "struct".
No -- the problem is with the BUS error itself. The C language
doesn't need *EVEN MORE* ways of creating UB with otherwise
acceptable syntax. This is more than just ugly; it's very, very
anti-intuitive.

My thought was that making &struct_obj.packed_member yield a void*
would force the programmer to be careful about how he uses it. I
momentarily forgot about the implicit conversion from void* to any
object pointer type. (Which was dumb; I've even been participating in
the permanent floating "don't cast the result of malloc()" flamewar.)
The right answer is to give such pointers a special attribute, like,
"_Unaligned" (or simply reuse "_Packed".)
[snip]

Yeah, that makes more sense than my idea.

BTW, I'm not necessarily arguing that packed structures would be worth
adding to the language, just thinking about how to do it right if it's
going to be done at all.

[...]
Agreed. The limit on call depth is typically determined by the
amount of available memory, something a compiler implementer can't
say much about. You could sensibly add a call depth clause to the
Translation Limits section (C99 5.2.4.1); that would require the
implementation to handle at least one program with a call
depth of N, but wouldn't really guarantee anything in general.

Well the problem with this is that then the *LINKER* would have to
be augmented to analyze the max relevant stack size of all
functions in an object and then assign the final stack according to
a formula (N * maxstacksz) that makes this work. It also kind of
makes use of alloca impossible.

No, adding a call depth requirement to C99 5.2.4.1 wouldn't require
any kind of analysis. There would be no guarantee that a given depth
will be supported in all cases, merely that the implementation has to
translate and execute at least one program that hits the limit (along
with all the others). If a requirement for a call depth of at least,
say, 100 were added to 5.2.4.1, it could probably be met by all
existing implementations with no changes.

[...]
Who is going to free the memory allocated for this string? If it's
static, then what happens when you try to printf two such items --
or just try to use it in a multitasking environment in general?

That's the same problem you have with any function that returns a
string. There are numerous solutions; programmers reinvent them all
the time.

If you can come up with a specification for an enhanced printf that
can produce arbitrary user-defined output for arbitrary user-defined
types, we can discuss whether it's better than "%s" with an image
function.
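
For instance, a minimal sketch of that image-function pattern (names are
mine), using a caller-supplied buffer so it stays reentrant:

#include <stdio.h>

struct point { int x, y; };

/* Render p into buf and return buf, so the call nests inside printf. */
char *point_image(char *buf, size_t n, const struct point *p)
{
    snprintf(buf, n, "(%d,%d)", p->x, p->y); /* snprintf is C99 */
    return buf;
}

/* usage:  char tmp[32];
           printf("at %s\n", point_image(tmp, sizeof tmp, &pt));  */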
[...] If you want "min" and "max" for int, there
aren't any spare operator symbols you can use. If you want to allow
overloading for arbitrary symbols (which some languages do), you'll
need to decide how and whether the user can define precedence for the
new operators.

Good point, but something as simple as "lowest precedence" and
increasing in the order in which they are declared seems fine
enough. Or maybe inverted -- just play with those combinations to
see what makes sense in practice. If that's not good enough, then
make the precedence level relative to another operator at the time
of declaration. For example:
int _Operator ?< after + (int x, int y) { /* max */
if (x > y) return x;
return y;
}

int _Operator ?> same ?< (int x, int y) { /* min */
if (x < y) return x;
return y;
}

I think "increasing in the order in which they are declared" would be
very bad; you could quietly change the semantics of an expression by
reordering the declarations of the operators it uses (e.g., by
changing the order of #include directives).

For most C operators, the common rule for legible code is
"parenthesize, parenthesize, parenthesize". For user-defined
operators (if I thought they were a good idea), I'd probably advocate
not defining their precedence at all; if you want to write "x + y @ z",
you *have* to use parentheses. The next best thing might be to say
that all user-defined operators have the same precedence, perhaps just
above assignment.

People already complain that C looks like line noise; I don't think
assigning meanings to more arbitrary sequences of punctuation marks
solves anything. (And I would have thought that "?<" should be min,
and "?>" should be max.)

In the particular case of "min" and "max", I'd much rather just call
them "min" and "max". If you're going to have operator overloading,
you probably want function overloading as well. Even if you insist on
operator syntax rather than function call syntax, "a max b" is at
least as legible as "a $< b".

For most of the things that I'd want to see as operators, the existing
operator symbols are more than sufficient; for anything else, just use
identifiers.

[snip]
 

Sidney Cadot

What do you mean when you say that "%x" is not a hex specifier?
That's either confusing or wrong.

printf("foo = %x\n", foo);

It is wrong. A very stupid lapse on my side.
[...]
Well, I'd show you, but it's impossible _in principle_. Given that you
are multiplying two expressions of the widest type supported by your
compiler, where would it store the result?
In a struct, or in an array of two intmax_t's or uintmax_t's.
(Wasn't that a common trick in primordial C, before the introduction
of "long"?)

Yes, that would work, and could be useful. For the signed case, a struct
containing an intmax_t for the most significant part and a uintmax_t
for the least significant part might be a more proper choice?

Best regards, Sidney
 

Sidney Cadot

Paul said:
Good point, but something as simple as "lowest precedence" and increasing in
the order in which they are declared seems fine enough. Or maybe inverted --
just play with those combinations to see what makes sense in practice. If
that's not good enough, then make the precedence level relative to another
operator at the time of declaration. For example:
int _Operator ?< after + (int x, int y) { /* max */
if (x > y) return x;
return y;
}

int _Operator ?> same ?< (int x, int y) { /* min */
if (x < y) return x;
return y;
}

That looks like a decent first stab at a proposed syntax. Some questions
though:

- What would be the constraints on acceptable operator names?

- How are you going to define left, right, or lack of associativity?

- In what way will you handle the possible introduction of ambiguity in
the parser that gets to parse the new tokens?

- What if I want a ?< to work both on int and on double types?

- Does your syntax allow (and if so, how) the introduction of unary
prefix operators (such as !), binary infix operators that may have
compile-time identifiers as a parameter (such as ->), n-ary operators
(such as the ternary a?b:c or your proposed quaternary carry/add
operator), and operators that exist both in unary and binary form (+, -)?

My gut feeling is that this would effectively force the compiler to
maintain a dynamic parser on-the-fly while scanning through the source,
which would be wildly complex. You mentioned that actual languages exist
that do this sort of thing; are they also compiled languages like C, or
are they interpreted languages of the functional variety?

Best regards,

Sidney
 

Keith Thompson

Sidney Cadot said:
It is wrong. A very stupid lapse on my side.

Hey, this is Usenet! You're not supposed to admit mistakes here. The
least you could do is sneakily change the subject and start being
personally abusive. 8-)}
 

Paul Hsieh

Sidney Cadot said:
Paul said:
Paul Hsieh wrote:
[...] I for one would be happy if more compilers would
fully start to support C99, [...]
I don't think that day will ever come. In its totality C99 is almost
completely worthless in real world environments. Vendors will be
smart to pick up restrict and a few of the goodies in C99 and just stop
there.
Want to take a bet...?

Sure. Vendors are waiting to see what the C++ people do, because they
are well aware of the irreconcilable conflicts that have arisen. Bjarne
and crew are going to be forced to take the new stuff from C99 in the bits and
pieces that don't cause any conflict or aren't otherwise stupid for other
reasons. The vendors are going to look at this and decide that the
subset of C99 that the C++ people chose will be the least problematic
solution and just go with that.

Ok. I'll give you 10:1 odds; there will be a (near-perfect) C99 compiler
by the end of this decade.

A single vendor?!?! Ooooh ... try not to set your standards too high.
Obviously, it's well known that the gnu C++ people are basically converging
towards C99 compliance and are most of the way there already. That's not my
point. My point is: will Sun, Microsoft, Intel, MetroWerks, etc. join the
fray so that C99 is ubiquitous to the point of obsoleting all previous C's for
all practical purposes for the majority of developers? Maybe the Comeau guy
will join the fray to serve the needs of the "perfect spec compliance" market
that he seems to be interested in.

If not, then projects that have a claim of real portability will never
embrace C99 (like LUA, or Python, or the JPEG reference implementation, for
example.) Even the average developers will forgo the C99 features for fear
that someone will try to compile their stuff on an old compiler.

Look, nobody uses K&R-style function declarations anymore. The reason is
that the ANSI standard obsoleted them, and everyone picked up the ANSI
standard. That only happened because *EVERYONE* moved forward and picked up
the ANSI standard. One vendor is irrelevant.
If instead, the preprocessor were a lot more functional, then you
could simply extract packed offsets from a list of declarations and
literally plug them in as offsets into a char[] and do the slow memcpy
operations yourself.

This would violate the division between preprocessor and compiler too
much (the preprocessor would have to understand quite a lot of C semantics).

No, that's not what I am proposing. I am saying that you should not use
structs at all, but you can use the contents of them as a list of
comma-separated entries. With a more beefed-up preprocessor one could find the
offset of a packed char array that corresponds to the nth element of the list
as a sum of sizeof()'s and you'd be off to the races.

Perhaps I'm missing something here, but wouldn't it be easier to use the
offsetof() macro?

It would be, but only if you have the packed structure mechanism. Other
people have posted indicating that in fact _Packed is more common than I
thought, so perhaps my suggestion is not necessary.
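
(For reference, offsetof itself is plain C89; a minimal usage sketch:)

#include <stddef.h>
#include <stdio.h>

struct record { char tag; int value; };

int main(void)
{
    /* Prints the byte offset of 'value', padding included. */
    printf("value at offset %lu\n",
           (unsigned long) offsetof(struct record, value));
    return 0;
}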
That's true. I don't quite see how this relates to the preceding
statement though.

I'm saying that trying to fix C's intrinsic problems shouldn't start or end
with some kind of resolution of call stack issues. Anyone who understands
machine architecture will not be surprised about call stack depth limitations.
There are far more pressing problems in the language that one would like to
fix.
I don't see why not?

Explain to me how you implement malloc() in a *multithreaded* environment
portably. You could claim that C doesn't support multithreading, but I highly
doubt you're going to convince any vendor that they should shut off their
multithreading support based on this argument. By dictating its existence in
the library, it would put the responsibility of making it work right in the
hands of the vendor without affecting the C standard's stance on not
acknowledging the need for multithreading.
Well, it looks to me you're proposing to have a feature-rich heap
manager. I honestly don't see why this couldn't be implemented portably
without platform-specific knowledge. Could you elaborate?

See my multithreading comment above. Also, efficient heaps are usually
written with a flat view of memory in mind. This is kind of impossible in
non-flat memory architectures (like segmented architectures.)
[...] I want this more for reasons of orthogonality in design than anything
else.

You want orthogonality in the C language? You must be joking ...
I'm all for that.

Well, I'm a programmer, and I don't care about binary output -- how does your
proposal help me decide what I think is useful to me?
I don't think it's too bad an idea (although I have never gotten round
to trying the mechanism gcc provides for this). In any case, this kind
of thing is so much more naturally done in an OOP-supporting language
like C++. Without being belligerent: why not use that if you want this
kind of thing?

Well, when I am programming in C++ I will use it. But I'm not going to move
all the way to using C++ just for this single purpose by itself.
I used "%x" as an example of a format specifier that isn't defined ('x'
being a placeholder for any letter that hasn't been taken by the
standard). The statement is that there'd be only about 15 letters left
for this kind of thing (including 'x' by the way -- it's not a hex
specifier). Sorry for the confusion, I should've been clearer.

Well what's wrong with %@, %*, %_, %^, etc?
* I think I would like to see a real string-type as a first-class
citizen in C, implemented as a native type. But this would open
up too big a can of worms, I am afraid, and a good case can be
made that this violates the principles of C too much (being a
low-level language and all).

The problem is that real string handling requires memory handling.
The other primitive types in C are flat structures that are fixed
width. You either need something like C++'s constructor/destructor
semantics or automatic garbage collection otherwise you're going to
have some trouble with memory leaking.

A very simple reference-counting implementation would suffice. [...]

This would complexify the compiler to no end. It's also hard to account for a
reference that was arrived at via something like "memcpy".

A first-class citizen string wouldn't be a pointer; neither would you
necessarily be able to get its address (although you should be able to
get the address of the characters it contains).

But a string has variable length. If you allow strings to be mutable, then
the actual sequence of characters has to be put into some kind of dynamic
storage somewhere. Either way, the base part of the string would in some way
have to be storable into, say, a struct. But you can copy a struct via
memcpy or however. But this then requires a count increment since there is
now an additional copy of the string. So how is memcpy supposed to know that
its contents contain a string that it needs to increase the ref count for?
Similarly, memset needs to know how to *decrease* such a ref count.

If you allow the base of the string itself to move (like those morons did in
the Safe C String Library) then simple things like:

string *a, b;

a = (string *) malloc (sizeof (string));
*a = b;
b = b + b + b; /* triple up b, presumably relocating the base */
/* But now *a is undefined */

are just broken.

Look, the semantics of C just don't easily allow for a useful string primitive
that doesn't have impact on the memory model (i.e., leak if you aren't
careful.) Even the Better String Library (http://bstring.sf.net/) concedes
that the programmer has to diligently call bdestroy() to clean up after
themselves, otherwise you'll just leak.
Ok, so the language should have a big bunch of operators, ready for the
taking. Incidentally, Mathematica supports this, if you want it badly.

Hey, it's not me -- apparently it's people like you who want more operators.
My point is that no matter what operators get added to the C language, you'll
never satisfy everyone's appetites. People will just want more and more,
though almost nobody will want all of what could be added.

My solution solves the problem once and for all. You have all the operators
you want, with whatever semantics you want.
This seems to me a bad idea for a multitude of reasons. First, it would
complicate most stages of the compiler considerably. Second, a
maintenance nightmare ensues: while the standard operators of C are
basically burnt into my soul, I'd have to get used to the Fantasy
Operator Of The Month every time I take on a new project, originally
programmed by someone else.

Yes, but if instead of actual operator overloading you only allow the
definition of these new operators, there will not be any of the *surprise*
factor. If you see one of these new operators, you can just view it like you
view an unfamiliar function -- obviously, you'll look up its definition.
There's a good reason that we use things like '+' and '*' pervasively,
in many situations; they are short, and easily absorbed in many
contexts. Self-defined operator tokens (consisting, of course, of
'atomic' operators like '+', '=', '<' ...) will lead to unreadable code,
I think; perhaps something akin to a complicated 'sed' script.

And allowing people to define their own functions with whatever names they
like doesn't lead to unreadable code? It's just the same thing. What makes
your code readable is adherence to an agreed upon coding standard that exists
outside of what the language defines.
Do you have a reference? That's bound to be a fun read, and he probably
missed a few candidates.

It was just in the notes to some meeting Bjarne had in the last year or so to
discuss the next C++ standard. His quote was something to the effect that
while adding a feature to C++ can have value, removing one would have even
more value. Maybe someone who is following the C++ standardization threads
can find a reference -- I just spent a few minutes on Google and couldn't
find it.
I can only speak for myself; I have been exposed, and think it's a bad
idea. When used very sparingly, it has its uses. However, introducing
new user-definable operators as you propose would be folly; the only way
operator overloading works in practice is if you maintain some sort of
link to the intuitive meaning of an operator. User defined operators
lack this by definition.

But so do user-definable function names. Yet functionally they are almost
the same.
"<>" would be a bad choice, since it is easy to confuse for "not equal
to". I've programmed a bit in IDL for a while, which has my dear "min"
and "max" operators.... It's a pity they are denoted "<" and ">",
leading to heaps of misery by confusion.

<<< and @ are nice though. I would be almost in favour of adding them,
were it not for the fact that this would drive C dangerously close in
the direction of APL.

You missed the "etc., etc., etc." part. I could keep coming up with them
until the cows come home: a! for factorial, a ^< b for "a choose b" (you want
language support for this because of the overflow concerns of using the
direct definition), <-> a for endian swapping, $% a for the fractional part
of a floating point number, a +>> b for the average (there is another
overflow issue there), etc., etc.
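The average case at least has a well-known portable workaround, which hints
at what a +>> operator would encapsulate (a sketch for unsigned operands):

unsigned avg(unsigned a, unsigned b)
{
    /* a + b == 2*(a & b) + (a ^ b), so (a & b) + ((a ^ b) >> 1)
       is floor((a + b) / 2) computed without intermediate overflow. */
    return (a & b) + ((a ^ b) >> 1);
}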
Again I wonder, seriously: wouldn't you be better off using C++?

No because I want *MORE* operators -- not just the ability to redefine the
ones I've got (and therefore lose some.)
Sure, but you're talking about something that goes a lot further than
run-of-the-mill operator overloading. I think the simple way would be
to just introduce these min and max operators and be done with it.

"min" and "max" are perhaps less important than "+" and "*", but they
are probably the most-used operations that are not available right now
as operators. If we are going to extend C with new operators, they would
be the most natural choice I think.

WATCOM C/C++ defined the macros min(a,b) and max(a,b) in some header files.
Why wouldn't the language just accept this? Is it because you want variable
length parameters? -- Well, in that case, does my preprocessor extension
proposal start to look like it's making more sense?
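Those macros are easy enough to write, but the classic definitions evaluate
the chosen argument twice, which is part of why people keep asking for real
operators (a sketch of the usual definitions):

#define min(a,b) (((a) < (b)) ? (a) : (b))
#define max(a,b) (((a) > (b)) ? (a) : (b))
/* Beware: min(i++, j) may increment i twice, since the winning
   argument is evaluated both in the test and in the result. */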
Those are not existing operators, as you know. They would have to be
defined in your curious "operator definition" scheme.

I find the idea freaky, yet interesting. I think C is not the place for
this (really, it would be too easy to compete in the IOCCC) but perhaps
in another language... Just to follow your argument for a bit, what
would an "operator definition" declaration look like for, say, the "?<"
min operator in your hypothetical extended C?

This is what I've posted elsewhere:

int _Operator ?< after + (int a, int b) {
    if (a < b) return a;  /* return the smaller operand: this is min */
    return b;
}
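Presumably a call site would then read like any built-in binary operator:

int lo = a ?< b;  /* min of a and b, via the definition above */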
Well, I'd show you, but it's impossible _in principle_. Given that you
are multiplying two expressions of the widest type supported by your
compiler, where would it store the result?

In two values of the widest type -- just as nearly every microprocessor
that has a multiply instruction does it:

high *% low = a * b;
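For widths below the maximum, C99 already allows this via a wider type; the
sticking point is precisely the widest type, where no double-width integer
exists. A sketch of the workable case:

#include <stdint.h>

void mul32x32(uint32_t a, uint32_t b, uint32_t *high, uint32_t *low)
{
    uint64_t p = (uint64_t)a * b;  /* full 64-bit product */
    *high = (uint32_t)(p >> 32);
    *low  = (uint32_t)p;
}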
Well, I don't know if the dozen-or-so big-number 'powermod' operations
that are needed to establish an SSL connection are as big a deal as
you make them out to be.

Its not me -- its Intel, IBM, Motorola, Sun and AMD who seem to be obsessed
with these instructions. Of course Amazon, Yahoo and Ebay and most banks are
kind of obsessed with them too, even if they don't know it.
It looks cute, I'll give you that. Could you please provide semantics?
It may be a lot less self-evident than you think.

How about:

- carry is set to either 1 or 0, depending on whether or not a + b overflows
(just follow the 2s complement rules if one of a or b is negative.)

- var is set to the result of the addition; the remainder if a carry occurs.

- The whole expression (if you put the whole thing in parentheses) returns
the value of carry.

+< would not be an operator in and of itself -- the whole syntax is required.
For example: c +< v = a * b would just be a syntax error. The "cuteness" was
stolen from an idea I saw in some ML syntax. Obviously a '-<' counterpart
for subtraction would also be useful.
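For unsigned operands the carry can already be recovered in portable C,
though few compilers turn the idiom into an add-with-carry instruction
(a sketch):

#include <stdint.h>

uint32_t add_with_carry(uint32_t a, uint32_t b, uint32_t *sum)
{
    *sum = a + b;                  /* wraps modulo 2^32 on overflow */
    return (uint32_t)(*sum < a);   /* 1 if the addition wrapped */
}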
Ah, I see you've never implemented a non-table-driven CRC or a binary
greatest common divisor algorithm.

You can find a binary gcd algorithm that I wrote here:

http://www.pobox.com/~qed/32bprim.c

You will notice how I don't use or care about carries coming out of a right
shift. There wouldn't be enough of a savings to matter.
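For readers who haven't seen one, a textbook binary gcd looks roughly like
this (a sketch; this is not the code at the URL above):

unsigned binary_gcd(unsigned a, unsigned b)
{
    unsigned shift = 0;
    if (a == 0) return b;
    if (b == 0) return a;
    while (((a | b) & 1) == 0) {   /* strip common factors of two */
        a >>= 1;
        b >>= 1;
        ++shift;
    }
    while ((a & 1) == 0) a >>= 1;  /* a is now odd */
    do {
        while ((b & 1) == 0) b >>= 1;
        if (a > b) { unsigned t = a; a = b; b = t; }
        b -= a;                    /* odd - odd is even */
    } while (b != 0);
    return a << shift;             /* restore the common twos */
}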
[...] They are both hard at work when you establish an SSL connection.
The specific operations I am citing make a *HUGE* difference and have
billion-dollar price tags associated with them.

These numbers you made up out of thin air, no? Otherwise, I'd welcome a
reference.

Widening multiplies cost transistors on the CPU. The hardware algorithms are
variations of your basic public-school multiply algorithm -- so it takes on
the order of n^2 transistors to perform the complete operation, where n is
the width in bits of the largest word the machine accepts for the multiplier.
If the multiply were not widened, they could save about half of those
transistors. So multiply those extra transistors by the number of CPUs
shipped with a widening multiply (PPC, x86s, Alphas, UltraSparcs, ... etc.)
and you easily end up in the billion-dollar range.
Sure is. Several good big-number libraries are available that have
processor-dependent machine code to do just this.

And that's the problem. They have to be hand-written in assembly. Consider
just the SWOX GNU multiprecision library. When the Itanium was introduced,
Intel promised that it would be great for e-commerce. The problem is that
the SWOX guys were having a hard time with IA64 assembly language (as
apparently lots of people do). So they projected performance results for
the Itanium without having code available to do what they claimed. So people
who wanted to consider using an Itanium system based on its performance for
e-commerce were stuck -- they had no code, and had to believe Intel's claims,
or SWOX's as to what the performance would be.

OTOH, if instead the C language had exposed a carry-propagating add and a
widening multiply in the language, then it would just be up to the Intel
*compiler* people to figure out how to make sure the widening multiply was
used optimally, and the SWOX/GMP people would just do a recompile to get
baseline results at least.
 
