sandeep said:
All that #pragma would say is that between the #pragma statement and the
corresponding #end pragma, the code would be entitled to assume that
filenames were in 8.3 format.
Saying that "the code would be entitled to assume" doesn't really mean
anything.
Do you really mean that the *code* is entitled to make this assumption,
or is the *implementation* allowed to do so? And what *exactly* are the
consequences if the assumption is violated?
Usually when an implementation is allowed to assume something,
the consequence of violating the assumption is undefined behavior.
Permitting a program's behavior to be undefined if it uses a filename
like "ninechars.text" doesn't seem particularly useful. Typically
the benefit is that the assumption permits some optimization;
where is the benefit here?
For example, this code:
int convert_file(char *filename)
{
#pragma 8.3_filenames
    char bak[13];
    *strchr(filename, ".") = '\0';
    strcpy(bak, filename);
    strcat(bak, ".BAK");
#end pragma
    file_copy(filename, bak);
    ...
}
For starters, consider the syntax. A #pragma must be followed by an
optional sequence of preprocessing-tokens, and under the usual
maximal-munch rule ``8.3_filenames'' lexes as a single pp-number
preprocessing token, not as an identifier; a pp-number of that form can
never be converted to a valid token, and it isn't the sort of name any
existing pragma uses.
If you're going to propose a change to the language, you have to
think about that kind of detail.
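For comparison, the pragmas the standard itself defines are spelled
with plain identifiers; a minimal example (C99's FP_CONTRACT, placed at
the top of a block):

/* C99 standard pragma: forbid contracting a*b + c into a single
   fused operation within this block */
double mul_add(double a, double b, double c)
{
#pragma STDC FP_CONTRACT OFF
    return a * b + c;
}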
You've shown us an example, but I still don't have a clue what this
#pragma is actually supposed to do. Is the behavior of convert_file
going to be different with your #pragma than without it? If so,
how? Please describe the difference in terms of how the function
behaves, or is permitted to behave, not in terms of what either
the implementation or the program may or may not assume.
How on Earth is the implementation or the program supposed to know
that either filename or bak is a file name? Or does your #pragma
affect all strings?
Incidentally, your example is broken. Once you change the second
argument to strchr from "." to '.' (please compile your code before
posting it), this:
char name[] = "hello.txt";
convert_file(name);
will invoke
file_copy("hello", "hello.BAK");
which I don't think is what you had in mind. Also, convert_file()
modifies the string to which its first parameter points, so the above
convert_file(name);
changes name from "hello.txt" to just "hello", and
convert_file("hello.txt");
invokes undefined behavior.
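For the record, something closer to what your example seems to be
aiming at would truncate only the copy, not the caller's string -- a
sketch, with file_copy assumed to exist as in your code:

#include <string.h>

extern int file_copy(const char *src, const char *dst); /* assumed */

int convert_file(const char *filename)
{
    char bak[13];                /* 8 + '.' + 3 + '\0' */
    const char *dot = strrchr(filename, '.');
    size_t base = dot ? (size_t)(dot - filename) : strlen(filename);

    if (base > 8)                /* base name doesn't fit 8.3 */
        return -1;
    memcpy(bak, filename, base); /* copy the base name only */
    strcpy(bak + base, ".BAK");  /* append the backup extension */
    return file_copy(filename, bak);
}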
[...]
Look, I don't think anyone has understood what I was suggesting.
I agree -- and I suggest that that includes you. We don't understand
what you're suggesting because you haven't described it clearly
enough. I speculate that you haven't described it clearly because
you don't have a clear or consistent idea of just what your proposal
really is.
The problem I was addressing is this: there is Standard C, great. But
many programs make assumptions beyond what is guaranteed by Standard C;
for example, they have to in order to implement network functions, read
directories, and so on. At the moment the only way for a program to
record the non-Standard, non-portable assumptions it is making is in
external documentation: comments in the source code, README files
shipped with the code, and so on. Now someone on a different system
compiles the code; maybe it doesn't compile, or maybe it seems to
compile fine but there are strange runtime errors. The reason is a
non-portable assumption in the original code, and the second person
didn't read the documentation, or it wasn't even documented at all.
Network functions and directory operations are generally covered by
secondary standards such as POSIX. If your program has, for example,
#include <sys/types.h>
#include <sys/socket.h>
#include <dirent.h>
then it won't compile on a system that doesn't provide those headers.
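To make that concrete: even a minimal directory listing is POSIX, not
ISO C, and it simply won't build where <dirent.h> is missing:

#include <stdio.h>
#include <dirent.h>   /* POSIX, not part of standard C */

int main(void)
{
    DIR *d = opendir(".");
    struct dirent *entry;

    if (d == NULL) {
        perror("opendir");
        return 1;
    }
    while ((entry = readdir(d)) != NULL)
        puts(entry->d_name);   /* print each entry's name */
    closedir(d);
    return 0;
}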
There may be better examples of what you're talking about. I suggest
you think of some.
Isn't it better if Standard C provides a common mechanism for all
programs to record what non-Standard assumptions they are making? Then
instead of successful compilation and mysterious runtime errors, the
compiler can say
Warning: code operates under #pragma IEEE_floating_point but this is not
supported in hardware on this platform. Switching to software emulation
for floating point - possible performance loss
Ah, so your #pragma IEEE_floating_point indicates that IEEE floating
point is supported *in hardware*, and you're assuming that if it's
not supported in hardware then it must be supported in software.
I don't think you mentioned that before.
The C standard doesn't generally concern itself with performance
issues. On some systems, *some* FP operations might be supported in
hardware, and others in software -- and some software implementations
might be faster than some hardware implementations. Other systems
might use entirely different floating-point representations and
not support IEEE at all. Others might use the IEEE floating-point
format, but not support all the semantics. And so forth. How many
kinds of #pragma would it take to cover all those possibilities,
plus the ones I haven't thought of?
I think I already mentioned the optional __STDC_IEC_559__ predefined
macro. Think about how it relates to what you're trying to do.
(I can't help you with that, since I don't know what you're trying
to do.)
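For what it's worth, a program can already test that macro today; a
minimal sketch:

#include <stdio.h>

int main(void)
{
#ifdef __STDC_IEC_559__
    puts("implementation claims IEC 60559 (IEEE 754) floating point");
#else
    puts("__STDC_IEC_559__ not defined; no such claim is made");
#endif
    return 0;
}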
A completely clear and full explanation! Of course developers may choose
not to use the #pragmas, but having a standard way of documenting
non-portable assumptions can't be a bad thing, surely.
It might not be a bad thing if it could be defined properly.