Is C99 the final C?

Irrwahn Grausewitz

Sidney Cadot said:
A hybrid monster is heaving into view....

Integer main(input, output)
{
    PrintLn('Hello World')
}
^
You forgot the period. ;-)
 
CBFalconer

E. Robert Tisdale said:
Michael said:
[Are there any] features that the C specification currently lacks
and which may be included in some future standardization.

The future of C is C++. The question now is,
"Will any future C++ standard adopt the features introduced in C99?"

restricted pointers,
variable-length arrays,
etc.
[ASCII-art signature, garbled in this archive: a sign reading "Please do not
feed the Trolls", followed by a second sign reading "PLEASE DO NOT FEED THE
TROLLS -- Thank you, Management".]
 
Simon Biber

Sidney Cadot said:
A hybrid monster is heaving into view....

Integer main(input, output)
{
    PrintLn('Hello World')
}

I wouldn't mind some Pascal-like declaration forms that support
cdecl-style syntax:

main(argc: int; argv: pointer to pointer to char): int
{
    linebuf: array 100 of char;
    fp: pointer to FILE;

    if(argc == 2 && (fp = fopen(argv[1], "r")) != NULL)
    {
        fgets(linebuf, sizeof linebuf, fp);
        fclose(fp);
    }
    return 0;
}
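
[For comparison, the same program written roughly in the declaration
syntax C has today:

    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char linebuf[100];
        FILE *fp;

        if (argc == 2 && (fp = fopen(argv[1], "r")) != NULL)
        {
            fgets(linebuf, sizeof linebuf, fp);
            fclose(fp);
        }
        return 0;
    }
]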
 
Chris Torek

Especially this latter form is quite useful, expressing the idea that
'a' is to be used if it has a non-zero value, else use 'b' as a
fallback. This could be threaded, as in 'a ?: b ?: c'... I'd seriously
hope the next committee would consider this. This actually /is/ useful.

It appears in other forms in other languages. For instance, the
Bourne shell has the syntax: ${var-default}, meaning "use the value
of variable foo if the variable exists, otherwise use the default".
(It nests as well: ${NEWSAUTHOR-${USER-`whoami`}}, for instance.)
In C it would have the slight flaw that "compares-equal-to-zero" is
the "unset" value, but C is pretty cavalier about zero being false. :)

[on min/max instructions and/or conditional moves -- it is not clear
what antecedent "this" referred to:]
... The only processor I've seen that has this is the Philips
Trimedia, an embedded processor optimized for multimedia streaming. It's
a VLIW processor with five parallel instruction units. Branch
prediction failure rollback is quite expensive on these beasts.

To short-circuit this thread with another, I think Paul Hsieh should
write a lot of hand-coded assembly for such hardware; he might
change his mind about what C code is "efficient". :) (VLIW can
really strain one's perceptions. In particular, it often becomes
better to do work that may be discarded, than to avoid that work,
because the work takes zero time while the avoidance takes several
cycles. Itanium, with its odd "bundling", can be viewed as a
poor-man's VLIW, too.)
 
Richard Bos

E. Robert Tisdale said:
Michael said:
[Are there any] features that the C specification currently lacks
and which may be included in some future standardization.

The future of C is C++.

Actually, C++ is C's disreputable past.

But you know that, being part of it ;->

Richard
 
Paul Hsieh

Malcolm said:
Jack Klein said:
I could wish that pigs had wings so they could fly, but unless I can
come up with at least some solid engineering plans that showed
feasibility and a favorable cost-benefit ratio, I would not expect
anyone to take my wish seriously.

I think there are two cases to be made for templates. The weak case is that
often it is handy to write a function once with several types of
arguments. For instance swap() naturally lends itself to a template
implementation. So too does mean().

Personally I don't think these cases are common enough to justify a new
language feature, and also involve some subtle problems. For instance
mean() is fine for floats and doubles, but will usually break when fed a
char array, and could easily break with a short.

The stronger case is that the C++ standard template library allows client
code to do away with dynamic memory, and almost to do away with pointers.
[...]

So, rather than implementing a whole new mechanism that takes the
language in another direction, why not try to implement templates
using the existing facilities and see where we are lacking?

The most obvious thing to use is macros -- in fact, I actually use
macros for templates all the time. But one has a difficult time
publishing a library of templates (you know something like a ...
STANDARD TEMPLATE LIBRARY) using such an idea because of the
following:

- The preprocessor does not have its own namespace for its own
symbols, so even something as simple as declaring temporaries and
using them with other variables passed as parameters is walking a
tightrope, since the author cannot guarantee that it is in a
different namespace, and therefore it might collide unintentionally.
For example:

#define swap(type,v1,v2) do { \
    type tmp;                 \
    tmp = v1;                 \
    v1 = v2;                  \
    v2 = tmp;                 \
} while (0)

actually doesn't work for something like swap(int,tmp,tmp1). So
using something like $tmp instead would be better -- the idea
being that "$" would be a valid symbol prefix only in the
preprocessor context, and would be the syntax error you would
expect it to be in ordinary C code.
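
[To make the collision concrete: the call above expands roughly to

    do { int tmp; tmp = tmp; tmp = tmp1; tmp1 = tmp; } while (0);

where the macro's own "tmp" shadows the caller's, so nothing is swapped.]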

- There is no type checking of parameters that can be performed at
compile time. Implementing compile time mechanisms as simple as:

assertTypeEqual(a, b, ...)
- Issues a compile time error if all the parameters are not
exactly equal in type.
assertTypeOneOf(a, b, ...)
- Issues an error if a's type is not the same as at least one of the
types of the rest of the parameters.
assertTypeCoercable(a, b, ...)
- Issues an error if any pair of the parameters are not
coercible with each other.

would solve this problem.
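
[Something in this spirit can be approximated with existing C -- a rough
sketch; the macro name is invented, and the "error" is only whatever
diagnostic the compiler emits:

    /* The conditional operator requires its pointer operands to have
       compatible types, so mismatched argument types draw a diagnostic. */
    #define ASSERT_TYPE_EQUAL(a, b)  ((void)(1 ? &(a) : &(b)))

    void demo(void)
    {
        int i = 0, j = 0;
        double d = 0.0;
        ASSERT_TYPE_EQUAL(i, j);       /* accepted: both operands are int   */
        /* ASSERT_TYPE_EQUAL(i, d); */ /* rejected: int * vs double * in ?: */
        (void)d;
    }
]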

- One cannot perform computations or useful control flow at
preprocessor time. Compare with the Microsoft Assembler's
preprocessor, for example. With such features,

    #forc $c in "ABCDEF..."
        putchar($c)
    #endforc

would result in putchar('A'); putchar('B'); ... etc.

    #for $c in 0,1,2,3,4,5
    #   define $d #eval($c * 2)
        printf("%2d", $d);
    #endfor

would result in printf("%2d", 0); printf("%2d", 2); ... etc.

    #range(0,5)

would expand as 0,1,2,3,4 and so on.

Such ideas could also take away a lot of the ideas like "lambda
expansion" in languages like Lisp/Scheme.
 
Paul Hsieh

E. Robert Tisdale said:
Michael said:
[Are there any] features that the C specification currently lacks
and which may be included in some future standardization.

The future of C is C++. The question now is,
"Will any future C++ standard adopt the features introduced in C99?"

restricted pointers,
variable-length arrays,
etc.

Well actually they probably will -- and it is in fact *that* which
will sound the final death knell for C99. Any good feature of C99 is
likely to be adopted by the C++ committee while every misfeature (like
defined complex numbers) will simply be ignored. Compiler vendors
(who, of course, make C++ compilers as well) that have passed up C99
conformance because of what pathetically little it did to try to move
forward the cause of the C language will more likely adopt the C++
standard and just leave the remaining C99 nonsense in the trash bin
where it belongs.
 
Richard Bos

Well actually they probably will -- and it is in fact *that* which
will sound the final death knell for C99. Any good feature of C99 is
likely to be adopted by the C++ committee

Can't be. Not needing to cast void pointers left, right and everywhere
is a very good feature of C, which will always be missing from C++.
Ditto most other features which make C a sleek, useful language and C++
a humongous monstrosity.

Richard
 
Dan Pop

In said:
Right-o. How does one become a committee member?

As I understand it, it's a matter of having enough money and free time
on your hands.
... My idea was to convince one reader to convince one committee member
that it's worth promoting it.

At least about the stack-fault thing. That really ought to be fixed next
time 'round.

Spend more time on comp.std.c and you won't be so optimistic. There are
huge loopholes in the current standard, inherited from the old standard
and no one wants to do anything to fix them. "It's not a problem in
practice" is the usual lame excuse.

Dan
 
Dan Pop

"We" don't, but maybe you do. Such implementations are among the
most important users of C today.

Yes, we do. Every time we discuss the main function or include
standard library functions in our solutions, we implicitly ignore
freestanding implementations, where main() need not have any special
semantics and where no standard library functions may be available.

We *have* to ignore freestanding implementations, because they're
too loosely covered by the standard. If you disagree, please post
a C program (it doesn't need to do anything at all) that is portable
to *any* freestanding implementation.

And don't forget to engage your brain next time you want to object to
one of my posts.

Dan
 
James Kuyper

Mark McIntyre said:
*shrug*. I understand your point, but when in implementation-dependent
territory, it's quite immaterial what ISO requires; it's never going to
safely port anyway, so abandon all hope, ye who enter here....

As a practical matter, implementation-dependence is not all-or-nothing.
There are degrees of implementation-dependence; some constructs impede
portability more than others (and that includes many strictly
conforming constructs, since not all implementations are fully
conforming). Relying on 'int' being exactly 32 bits restricts the
portability of your code to a certain set of implementations. Relying
upon 'long' being exactly 32 bits restricts the portability of your
code to a different, overlapping set of implementations. It's
legitimate to worry about which of those two sets is bigger.

It's better to write code that checks for INT32_MAX and uses int32_t,
relying on a supplemental "inttypes.h" if necessary until you have access
to a true C99 implementation. However, it's not completely wrong to
use the other approach.
 
Dan Pop

In said:
It's better to write code that checks for INT32_MAX and uses int32_t,
relying on a supplemental "inttypes.h" if necessary until you have access
to a true C99 implementation.

That's open to debate. Hiding the real type behind a typedef has its own
costs in terms of readability and maintainability, and it's more bug-prone
than using a well-defined type. It's trivial to check the assumption at
compile time, so there is no risk of generating broken code.
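
[One conventional way to write that compile-time check -- a sketch; the
typedef name is arbitrary, and the test can be adjusted to whatever
assumption the code actually relies on:

    #include <limits.h>

    /* Fails to compile (negative array size) unless int has the
       expected 32-bit range. */
    typedef char int_has_32_bit_range[(INT_MAX == 0x7FFFFFFF) ? 1 : -1];
]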
However, it's not completely wrong to use the other approach.

This is true, regardless of which is the other approach ;-)

Dan
 
goose

Sidney Cadot said:
goose said:
* triple-&& and triple-|| operators: &&& and ||| with
semantics like the 'and' and 'or' operators in python:

a &&& b ---> if (a) then b else a
a ||| b ---> if (a) then a else b

[snip]

result = a? b: 0; /* &&& */

ITYM a ? b : a


surely it's the same thing?

Eg.

a() ? b() : a()

is not equivalent to

a() ? b() : 0

if a() has side-effects.

no, but if a and b are not macros which expand to a() and b(), then

a ? b : 0

is identical to

a ? b : a

<nit>
the example code above did not actually state whether or not
macros were used, or if there were any side-effects.

it *did*, however, use a and b, and not a() and b().

hand
goose,
not busy today at all.
 
Paul Hsieh

Sidney Cadot said:
I think C99 has come a long way to fix the most obvious problems in
C89 (and its predecessors).

It has? I can't think of a single feature in C99 that would be
relevant to any code I have ever written or will ever write in C,
with the exception of "restrict" and //-style comments.
There are a number of features in C99 that I will steer away from if
this issue ever comes up.
[...] I for one would be happy if more compilers would
fully start to support C99. It will be a good day when I can actually
start to use many of the new features without having to worry about
portability too much, as is the current situation.

I don't think that day will ever come. In its totality C99 is almost
completely worthless in real world environments. Vendors will be
smart to pick up restrict and a few of the goodies in C99 and just stop
there.
There's always some things that could be improved of course. My
personal wish-list would include (in no particular order):

* mandatory support for fixed-width integers (in C99, this is
optional).

Hmm ... no, if you can determine that a platform does not support such
a fixed-width type at compile time, that's probably good enough.
* support for a "packed" attribute to structs, guaranteeing that no
padding occurs.

Indeed, this is something I use on the x86 all the time. The problem
is that on platforms like UltraSparc or Alpha, this will either
inevitably lead to BUS errors, or extremely slow performing code.

If instead, the preprocessor were a lot more functional, then you
could simply extract packed offsets from a list of declarations and
literally plug them in as offsets into a char[] and do the slow memcpy
operations yourself.
* upgraded status of enum types (they are currently quite
interchangeable with ints); deprecation of implicit casts from
int to enum (perhaps supported by a mandatory compiler warning).

I agree. Enums, as far as I can tell, are almost useless from a
compiler-assisted code integrity point of view because of the
automatic coercion between ints and enums. It's almost not worth
bothering to ever use an enum for any reason because of it.
* a clear statement concerning the minimal level of active function
call invocations that an implementation needs to support.
Currently, recursive programs will stackfault at a certain point,
and this situation is not handled satisfactorily in the standard
(it is not addressed at all, that is), as far as I can tell.

That doesn't seem possible. The amount of "stack" that an
implementation might use for a given function is clearly not easy to
define. Better to just leave this loose.
* a library function that allows the retrieval of the size of a memory
block previously allocated using "malloc"/"calloc"/"realloc" and
friends.

There's a lot more that you can do as well. Such as a tryexpand()
function which works like realloc except that it performs no action
except returning with some sort of error status if the block cannot be
resized without moving its base pointer. Further, one would like to
be able to manage *multiple* heaps, and have a freeall() function --
it would make the problem of memory leaks much more manageable for
many applications. It would almost make some cases enormously faster.
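
[Sketched prototypes for the sort of interface meant here -- entirely
hypothetical names; nothing like this exists in the standard library:

    #include <stddef.h>

    /* Resize a block in place, or fail; never moves the block. */
    void *tryexpand(void *ptr, size_t newsize);

    /* Independently managed heaps, with a one-shot release. */
    typedef struct heap heap_t;
    heap_t *heap_create(void);
    void   *heap_alloc(heap_t *h, size_t n);
    void    heap_freeall(heap_t *h);   /* frees every block allocated from h */
]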
* a #define'd constant in stdio.h that gives the maximal number of
characters that a "%p" format specifier can emit. Likewise, for
other format specifiers such as "%d" and the like.

* a printf format specifier for printing numbers in base-2.

Ah -- the kludge request. Rather than adding format specifiers one at
a time, why not instead add in a way of being able to plug in
programmer-defined format specifiers? I think people in general would
like to use printf for printing out more than just the base types in a
collection of just a few formats defined at the whims of some 70s UNIX
hackers. Why not be able to print out your data structures, or
relevant parts of them as you see fit?
* I think I would like to see a real string-type as a first-class
citizen in C, implemented as a native type. But this would open
up too big a can of worms, I am afraid, and a good case can be
made that this violates the principles of C too much (being a
low-level language and all).

The problem is that real string handling requires memory handling.
The other primitive types in C are flat structures that are fixed
width. You either need something like C++'s constructor/destructor
semantics or automatic garbage collection otherwise you're going to
have some trouble with memory leaking.

With the restrictions of the C language, I think you are going to find
it hard to have even a language-implemented primitive that takes you
anywhere beyond what I've done with the better string library, for
example (http://bstring.sf.net). But even with bstrlib, you need to
explicitly call bdestroy to clean up your bstrings.

I'd be all for adding bstrlib to the C standard, but I'm not sure it's
necessary. It's totally portable and freely downloadable, without much
prospect for compiler implementors to improve upon it with any native
implementations, so it might just not matter.
* Normative statements on the upper-bound worst-case asymptotic
behavior of things like qsort() and bsearch() would be nice.

Yeah, it would be nice to catch up to where the C++ people have gone
some years ago.
O(n*log(n)) for number-of-comparisons would be fine for qsort,
although I believe that would actually preclude a qsort()
implementation by means of the quicksort algorithm :)

Anything that precludes the implementation of an actual quicksort
algorithm is a good thing. Saying Quicksort is O(n*log(n)) most of
the time is like saying Michael Jackson does not molest most of the
children in the US.
* mandatory constants for many things that currently need to
be tested by autoconf and similar tools, e.g. endianness.

A good idea.
* deprecate trigraphs. Let's end the madness once and for all.

Let's not and say we did.
* a reliable and easy way to walk over all integers in a certain
interval, including the MAX value for that type, by means of a
for loop; eg., currently

for(unsigned i=0;i<=UINT_MAX;i++) ...

doesn't work as intended.

Hmmm ... it's not like you can't construct a loop to do this correctly,
so I'm not sure you need a language extension just for this. I think
this is too marginal.
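
[One such construction -- a sketch that visits every unsigned int value,
including UINT_MAX, exactly once; the helper is only for illustration,
and the wraparound after the final increment is well defined for
unsigned types:

    #include <limits.h>

    void visit_all_unsigned(void (*use)(unsigned int))
    {
        unsigned int i = 0;
        do {
            use(i);                    /* body sees every value 0..UINT_MAX */
        } while (i++ != UINT_MAX);     /* wraps to 0 after the last pass    */
    }
]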
* a "reverse comma" type expression, for example denoted by
a reverse apostrophe, where the leftmost value is the value
of the entire expression, but the right-hand side is also
guaranteed to be executed.

This seems too esoteric.
* triple-&& and triple-|| operators: &&& and ||| with semantics
like the 'and' and 'or' operators in python:

a &&& b ---> if (a) then b else a
a ||| b ---> if (a) then a else b

(I think this is brilliant, and actually useful sometimes).

Hmmm ... why not instead have ordinary operator overloading? While
this is sometimes a useful shorthand, I am sure that different
applications have a different list of cutesy compactions that would be
worthwhile instead of the one above.
* a way to "bitwise invert" a variable without actually
assigning, complementing "&=", "|=", and friends.

Is a ~= a really that much of a burden to type?
* 'min' and 'max' operators (following gcc: ?< and ?>)

As I mentioned above, you might as well have operator overloading instead.
* a div and matching mod operator that round to -infinity,
to complement the current less useful semantics of rounding
towards zero.

Well ... but this is the very least of the kinds of arithmetic operator
extensions that one would want. A widening multiply operation is
almost *imperative*. It always floors me that other languages are not
picking this up. Nearly every modern microprocessor in existence has
a widening multiply operation -- because the CPU manufacturers *KNOW*
it's necessary. And yet it's not accessible from any language. Probably
because most languages have been written on top of C or C++. And what
about a simple carry capturing addition?
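
[A carry-capturing addition can at least be expressed in portable C,
since unsigned overflow wraps by definition -- a sketch, with an invented
helper name; whether the compiler maps it onto the carry flag is another
matter:

    /* Returns a + b and stores the carry-out (0 or 1) in *carry. */
    unsigned int add_with_carry(unsigned int a, unsigned int b,
                                unsigned int *carry)
    {
        unsigned int sum = a + b;
        *carry = (sum < a);   /* the sum wrapped iff it is below an operand */
        return sum;
    }
]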
Personally, I don't think it would be a good idea to have templates
in C, not even simple ones. This is bound to have quite complicated
semantics that I would not like to internalize.

Right -- this would just be making C into C++. Why not instead
dramatically improve the functionality of the preprocessor so that the
macro-like cobblings we put together in place of templates are
actually good for something? I've posted elsewhere about this, so I
won't go into details.
 
Sidney Cadot

As I understand it, it's a matter of having enough money and free time
on your hands.

Ah well, won't be joining anytime soon then.
Spend more time on comp.std.c and you won't be so optimistic. There are
huge loopholes in the current standard, inherited from the old standard
and no one wants to do anything to fix them. "It's not a problem in
practice" is the usual lame excuse.

I never understood why people get so worked up about practice. It's
clearly an overrated part of reality.

Best regards,

Sidney
 
Sidney Cadot

goose said:
no, but if a and b are not macros which expand to a() and b(), then

a ? b : 0

is identical to

a ? b : a

<nit>
the example code above did not actually state whether or not
macros were used, or if there were any side-effects.

it *did* however, use a and b, and not a() and b().

Well, I used a and b as stand-ins for "any two expressions".


By the way, got any funny looks talking to people today?

(You forgot the </nit>)

Best regards,

Sidney
 
Keith Thompson

* support for a "packed" attribute to structs, guaranteeing that no
padding occurs.

Indeed, this is something I use on the x86 all the time. The problem
is that on platforms like UltraSparc or Alpha, this will either
inevitably lead to BUS errors, or extremely slow performing code.

If instead, the preprocessor were a lot more functional, then you
could simply extract packed offsets from a list of declarations and
literally plug them in as offsets into a char[] and do the slow memcpy
operations yourself.

Obviously an implementation of packed structures is useless if it
leads to bus errors.

There's ample precedent in other languages (Pascal and Ada at least)
for packed structures. Typically the members are aligned on byte
boundaries rather than on the most efficient alignment boundaries.
The generated code just has to deal with any misalignment; this
shouldn't be all that difficult. (In the worst case, the compiler can
just generate calls to memcpy().) Users need to be aware that they're
trading size for performance.

One potential problem (assume 4-byte ints, normally requiring 4-byte
alignment):

_Packed struct {    /* or whatever syntax you like */
    char c;         /* offset 0, size 1 */
    int i;          /* offset 1, size 4 */
} packed_obj;

You can't sensibly take the address of packed_obj.i. A function that
takes an "int*" argument will likely die if you give it a misaligned
pointer (unless you want to allow _Packed as an attribute for function
arguments). The simplest approach would be to forbid taking the
address of a member of a packed structure (think of the members as fat
bit fields). Another possibility (ugly but perhaps useful) is to make
the address of a member of a packed structure yield a void*.
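
[This is also what portable code does by hand today when it has to read
a misaligned field out of a byte buffer -- a sketch, with a made-up
function name:

    #include <string.h>

    /* Read an int stored, possibly misaligned, at byte offset 1 of buf. */
    int read_packed_int(const unsigned char *buf)
    {
        int value;
        memcpy(&value, buf + 1, sizeof value);  /* safe for any alignment */
        return value;
    }
]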
I agree. Enums, as far as I can tell, are almost useless from a
compiler-assisted code integrity point of view because of the
automatic coercion between ints and enums. It's almost not worth
bothering to ever use an enum for any reason because of it.

I don't think enums can be repaired without breaking tons of existing
code. And they are useful as currently defined for defining names for
a number of distinct integer values. If you want Pascal-like
enumeration types, you'd need a new construct -- but I think having
two distinct kinds of enumeration types would be too ugly for new
users.
That doesn't seem possible. The amount of "stack" that an
implementation might use for a given function is clearly not easy to
define. Better to just leave this loose.

Agreed. The limit on call depth is typically determined by the amount
of available memory, something a compiler implementer can't say much
about. You could sensibly add a call depth clause to the Translation
Limits section (C99 5.2.4.1); that would require the implementation to
handle at least one program with a call depth of N, but wouldn't really
guarantee anything in general.

[...]
Ah -- the kludge request. Rather than adding format specifiers one at
a time, why not instead add in a way of being able to plug in
programmer-defined format specifiers? I think people in general would
like to use printf for printing out more than just the base types in a
collection of just a few formats defined at the whims of some 70s UNIX
hackers. Why not be able to print out your data structures, or
relevant parts of them as you see fit?

Well, you can do that with the "%s" specifier, as long as you've
defined a function that returns an image string for a value of your
type (with all the complications of functions returning dynamic
strings).

I think that base-2 literals (0b11001001) and a corresponding printf
format specifier would be sensible additions.
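
[A sketch of that "%s" route for base-2 output; the helper name is
invented, and the caller supplies the buffer to sidestep the
dynamic-string problem mentioned above:

    #include <stdio.h>
    #include <limits.h>

    /* Format v in binary into buf, which must hold at least
       CHAR_BIT * sizeof v + 1 characters; returns the first digit. */
    char *to_base2(unsigned int v, char *buf)
    {
        char *p = buf + CHAR_BIT * sizeof v;
        *p = '\0';
        do {
            *--p = (char)('0' + (v & 1u));
            v >>= 1;
        } while (v != 0);
        return p;
    }

    /* printf("%s\n", to_base2(0xC9, buf)) prints 11001001. */
]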

[...]
This seems too esoteric.

And you'd need a trigraph (and probably a digraph) for the reverse
apostrophe character.

If you really need such a thing, you can fake it:

( tmp = LHS, RHS, tmp )
Hmmm ... why not instead have ordinary operator overloading? While
this is sometimes a useful shorthand, I am sure that different
applications have different list cutesy compactions that would be
worth while instead of the one above.


Is a ~= a really that much of a burden to type?

I think you mean "a = ~a".
As I mentioned above, you might as well have operator overloading instead.

Most languages that provide operator overloading restrict it to
existing operator symbols. If you want "min" and "max" for int, there
aren't any spare operator symbols you can use. If you want to allow
overloading for arbitrary symbols (which some languages do), you'll
need to decide how and whether the user can define precedence for the
new operators. And of course it would provide rich fodder for the
IOCCC.

[...]
Right -- this would just be making C into C++. Why not instead
dramatically improve the functionality of the preprocessor so that the
macro-like cobblings we put together in place of templates are
actually good for something? I've posted elsewhere about this, so I
won't go into details.

Hmm. I'm not sure that making the preprocessor *more* powerful is
such a good idea. It's too easy to abuse as it is:

#include <stdio.h>
#define SIX  1+5
#define NINE 8+1
int main(void)
{
    /* Prints "6 * 9 = 42": SIX * NINE expands to 1+5 * 8+1. */
    printf("%d * %d = %d\n", SIX, NINE, SIX * NINE);
    return 0;
}

If you can improve the preprocessor without making it even more
dangerous, that's great. (I don't think I've seen your proposal.)
 
Sidney Cadot

It has? I can't think of a single feature in C99 that would be
relevant to any code I have ever written or will ever write in C,
with the exception of "restrict" and //-style comments.

For programming style, I think loop-scoped variable declarations are a
big win. Then there are variable-length arrays, and complex numbers... I'd
really use all this (and more) quite extensively in day-to-day work.
[...] I for one would be happy if more compilers would
fully start to support C99. It will be a good day when I can actually
start to use many of the new features without having to worry about
portability too much, as is the current situation.
I don't think that day will ever come. In its totality C99 is almost
completely worthless in real world environments. Vendors will be
smart to pick up restrict and a few of the goodies in C99 and just stop
there.

Want to take a bet...?
Indeed, this is something I use on the x86 all the time. The problem
is that on platforms like UltraSparc or Alpha, this will either
inevitably lead to BUS errors, or extremely slow performing code.

Preventing the former is the compiler's job; as for the latter, the
alternative is to do struct packing/unpacking by hand. Did that, and
didn't like it one bit. And of course it's slow, but I need the
semantics.
If instead, the preprocessor were a lot more functional, then you
could simply extract packed offsets from a list of declarations and
literally plug them in as offsets into a char[] and do the slow memcpy
operations yourself.

This would violate the division between preprocessor and compiler too
much (the preprocessor would have to understand quite a lot of C semantics).
I agree. Enums, as far as I can tell, are almost useless from a
compiler-assisted code integrity point of view because of the
automatic coercion between ints and enums. It's almost not worth
bothering to ever use an enum for any reason because of it.
Yes.
That doesn't seem possible. The amount of "stack" that an
implementation might use for a given function is clearly not easy to
define. Better to just leave this loose.

It's not easy to define, that's for sure. But to call into recollection
a post from six weeks ago:

#include <stdio.h>

/* returns n! modulo 2^(number of bits in an unsigned long) */
unsigned long f(unsigned long n)
{
    return (n==0) ? 1 : f(n-1)*n;
}

int main(void)
{
    unsigned long z;
    for(z=1;z!=0;z*=2)
    {
        printf("%lu %lu\n", z, f(z));
        fflush(stdout);
    }
    return 0;
}

....This is legal C (as per the Standard), but it overflows the stack on
any implementation (which is usually a symptom of UB). Why is there no
statement in the standard that even so much as hints at this?
There's a lot more that you can do as well. Such as a tryexpand()
function which works like realloc except that it performs no action
except returning with some sort of error status if the block cannot be
resized without moving its base pointer. Further, one would like to
be able to manage *multiple* heaps, and have a freeall() function --
it would make the problem of memory leaks much more manageable for
many applications. It would almost make some cases enormously faster.

But this is perhaps territory that the Standard should steer clear of,
more like something a well-written and dedicated third-party library
could provide.
Ah -- the kludge request.

I'd rather see this as filling in a gaping hole.
> Rather than adding format specifiers one at
a time, why not instead add in a way of being able to plug in
programmer-defined format specifiers?

Because that's difficult to get right (unlike a proposed binary output
form).
> I think people in general would
like to use printf for printing out more than just the base types in a
collection of just a few formats defined at the whims of some 70s UNIX
hackers. Why not be able to print out your data structures, or
relevant parts of them as you see fit?

The %x format specifier mechanism is perhaps not a good way to do this,
if only because it would only allow something like 15 extra output formats.
The problem is that real string handling requires memory handling.
The other primitive types in C are flat structures that are fixed
width. You either need something like C++'s constructor/destructor
semantics or automatic garbage collection otherwise you're going to
have some trouble with memory leaking.

A very simple reference-counting implementation would suffice. But yes,
it would not rhyme well with the rest of C.
With the restrictions of the C language, I think you are going to find
it hard to have even a language-implemented primitive that takes you
anywhere beyond what I've done with the better string library, for
example (http://bstring.sf.net). But even with bstrlib, you need to
explicitly call bdestroy to clean up your bstrings.

I'd be all for adding bstrlib to the C standard, but I'm not sure it's
necessary. It's totally portable and freely downloadable, without much
prospect for compiler implementors to improve upon it with any native
implementations, so it might just not matter.

Yeah, it would be nice to catch up to where the C++ people have gone
some years ago.

I don't think it is a silly idea to have some consideration for
worst-case performance in the standard, especially for algorithmic
functions (of which qsort and bsearch are the most prominent examples).
Anything that precludes the implementation of an actual quicksort
algorithm is a good thing. Saying Quicksort is O(n*log(n)) most of
the time is like saying Michael Jackson does not molest most of the
children in the US.

This seems too esoteric.

Why is it any more esoteric than having a comma operator?
Hmmm ... why not instead have ordinary operator overloading?

I'll provide three reasons.

1) because it is something completely different
2) because it is quite unrelated (I don't get the 'instead')
3) because operator overloading is mostly a bad idea, IMHO
While
this is sometimes a useful shorthand, I am sure that different
applications have a different list of cutesy compactions that would be
worthwhile instead of the one above.

.... I'd like to see them. &&& is a bit silly (it's fully equivalent to
"a ? b : 0") but ||| (or ?: in gcc) is actually quite useful.
Is a ~= a really that much of a burden to type?

It's more of a strain on the brain to me why there are compound
assignment operators for nigh all binary operators, but not for this
unary one.
As I mentioned above, you might as well have operator overloading instead.

Now I would ask you: which existing operator would you like to overload
for, say, integers, to mean "min" and "max" ?
Well ... but this is the very least of the kinds of arithmetic operator
extensions that one would want. A widening multiply operation is
almost *imperative*. It always floors me that other languages are not
picking this up. Nearly every modern microprocessor in existence has
a widening multiply operation -- because the CPU manufacturers *KNOW*
it's necessary. And yet it's not accessible from any language.

....It already is available in C, given a good-enough compiler. Look at
the code gcc spits out when you do:

unsigned long a = rand();
unsigned long b = rand();

unsigned long long c = (unsigned long long)a * b;
Probably because most languages have been written on top of C or C++.
> And what about a simple carry capturing addition?

Many languages exist where this is possible; they are called
"assembly". There is no way that you could come up with a well-defined
semantics for this.

Did you know that a PowerPC processor doesn't have a shift-right where
you can capture the carry bit in one instruction? Silly but no less true.
Right -- this would just be making C into C++. Why not instead
dramatically improve the functionality of the preprocessor so that the
macro-like cobblings we put together in place of templates are
actually good for something? I've posted elsewhere about this, so I
won't go into details.

This would intertwine the preprocessor and the compiler; the
preprocessor would have to understand a great deal more about C
semantics than it currently does (almost nothing).

Best regards,

Sidney
 
CBFalconer

Paul said:
.... snip ...


I agree. Enums, as far as I can tell, are almost useless from a
compiler-assisted code integrity point of view because of the
automatic coercion between ints and enums. It's almost not worth
bothering to ever use an enum for any reason because of it.

On the contrary, they are extremely useful in defining a series of
constants that must follow each other, and yet allow easy
revision. Compare:

#define one 1
#define two (one + 1)
#define three (two + 1)
....
#define last (something + 1)

with

enum {one = 1, two, three, ...., last};

and compare the effort (and potential error) of injecting
twoandahalf in each.
 
