David Mathog
In the beginning (Kernighan & Ritchie 1978) there was fprintf, and unix
write, but no fwrite. That is, there was no portable C method for writing
binary data, only system calls, which were OS specific. With C89,
fwrite/fread were added to the C standard to allow portable binary I/O
to files. I wonder, though, why the choice was made to extend the unix
function write() into a standard C function rather than to extend the
existing standard C function fprintf to allow binary operations?
Consider a bit of code like this (error checking and other details omitted):
int ival;
double dval;
char string[10]="not full\0\0";
FILE *fp;
fp = fopen("file.name","w");
(void) fprintf(fp,"%i%f%s",ival,dval,string);
It always seemed to me that the natural extension, if the data needed to
be written in binary, would have been either this (which would have
allowed type checking):
(void) fprintf(fp,"%bi%bf%bs",ival,dval,string);
or perhaps just this (which would not have allowed type checking):
(void) fprintf(fp,"%b%b%b",ival,dval,string);
(Clearly there are some issues in deciding whether to write only the
characters of "not full" or the entire buffer, which could have been
handled in the %bs form using the field width, for instance.)
Anyway, in the real world fwrite was chosen. For those of you who were
around for this decision, was extending fprintf considered instead of,
or in addition to fwrite? What was the deciding factor for fwrite?
I'm guessing that it was that everybody had been using write() for years
and it was thought that fwrite was a more natural extension, but that is
just a guess.
Thanks,
David Mathog