Complicated macro...


No Such Luck

I had a situation in a C program, which contained hundreds of calls to
"fprintf (stdout, ...", where I needed to flush stdout immediately
after any stream was sent there. More specifically, I needed to add the
command "fflush(stdout);" after every command that started with:

fprintf(stdout
fprintf (stdout
fprintf( stdout
fprintf ( stdout
etc.

I wound up doing this manually (took about 30 minutes), but I later
thought that I could have defined a macro to take care of this
substitution. However, I couldn't even begin to think of a definition
that would handle fprintf calls with a variable number of arguments.

Any ideas?

Thanks.
 

James Hess

No Such Luck said:
I had a situation in a C program, which contained hundreds of calls to
"fprintf (stdout, ...", where I needed to flush stdout immediately
after any stream was sent there. More specifically, I needed to add the
command "fflush(stdout);" after every command that started with:

Did you consider using the setvbuf() function from the
C standard library to just turn off buffering of stdout?

http://www.opengroup.org/onlinepubs/007908799/xsh/setvbuf.html

Usually stdout goes out after every line or when input is next
requested and there's little need to flush.

But if you just flush every time, it seems like you might as well
just have the buffering turned off.
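
For example, something along these lines near the top of main() ought
to do it (an untested sketch; setvbuf() has to be called before
anything is written to the stream):

#include <stdio.h>

int main(void)
{
    /* switch stdout to unbuffered: every fprintf goes straight out,
       so no fflush() calls are needed anywhere */
    setvbuf(stdout, NULL, _IONBF, 0);

    fprintf(stdout, "this appears immediately\n");
    return 0;
}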

-J. Hess
 

No Such Luck

Did you consider using the setvbuf() function from the
C standard library to just turn off buffering of stdout?

http://www.opengroup.org/onlinepubs/007908799/xsh/setvbuf.html

I did not know about this, and it seems like it would have been an
acceptable solution. Just out of curiosity, why is stdout buffering
the default behavior? I can't think of that many reasons why buffering
of stdout would be needed anyway.
Usually stdout goes out after every line or when input is next
requested and there's little need to flush.

I agree. And in command line mode, this is the behavior seen. However,
when this program is called as a child process and a pipe is used to
capture the stdout of the process, there is a considerable delay.
Flushing is necessary in this case.
But if you just flush every time, it seems like you might as well
just have the buffering turned off.

I agree. I'm still interested in a possible macro definition solution,
as well.

Thanks!
 

James Hess

No Such Luck said:
I had a situation in a C program, which contained hundreds of calls to
"fprintf (stdout, ...", where I needed to flush stdout immediately
after any stream was sent there. More specifically, I needed to add the
command "fflush(stdout);" after every command that started with:

fprintf(stdout

Are you familiar with variadic macros?

In C99 you could do something like

#define print_it(file_ptr, format, ...) do \
    if (fprintf(file_ptr, format, ##__VA_ARGS__ ) == 0) \
        fflush(file_ptr); \
    /* else... handle the error if you like */ \
while(0)
 

Michael Mair

You are aware that this is an unnecessary obfuscation of
"printf("?
Are you familiar with variadic macros?

In C99 you could do something like

#define print_it(file_ptr, format, ...) do \
    if (fprintf(file_ptr, format, ##__VA_ARGS__ ) == 0) \
        fflush(file_ptr); \
    /* else... handle the error if you like */ \
while(0)

This is not valid C.
__VA_ARGS__ is never optional, so just use "..." instead of
"format, ..." and "__VA_ARGS__" instead of "format, ##__VA_ARGS__".
You probably are referring to a compiler extension of a certain
"free" compiler.
Apart from that, fprintf() returns EOF on error and otherwise the
number of characters transmitted.
Minor nit: keep to the usual convention of upper-case macro
names:

#include <stdio.h>

#define PRINT_TO(file_ptr, ...) \
    do { \
        if (fprintf(file_ptr, __VA_ARGS__ ) >= 0) \
            fflush(file_ptr); \
        /* else... handle the error if you like */ \
    } while(0)
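
At the call site that would look something like this (untested;
count and msg are just placeholder variables):

PRINT_TO(stdout, "Loaded %d records\n", count);
PRINT_TO(stderr, "warning: %s\n", msg);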


Cheers
Michael
 

Michael Mair

No Such Luck said:
I did not know about this, and it seems like it would have been an
acceptable solution. Just out of curiosity, why is stdout buffering
the default behavior? I can't think of that many reasons why buffering
of stdout would be needed anyway.

Because output is usually _very_ slow compared to the execution
time of the rest of your code. If you switch off buffering
(or are fflush()ing at every output), you usually have to wait
longer as you have additional overhead (initiating output etc.).

Try adding _much_ output to something computationally expensive.
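
Something along these lines (untested) makes the cost easy to see;
time it once as-is and once with the setvbuf() line uncommented:

#include <stdio.h>

int main(void)
{
    long i;

    /* uncomment the next line, rebuild, and compare the run time */
    /* setvbuf(stdout, NULL, _IONBF, 0); */

    for (i = 0; i < 1000000L; i++)
        printf("%ld\n", i);

    return 0;
}
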
I agree. And in command line mode, this is the behavior seen. However,
when this program is called as a child process and a pipe is used to
capture the stdout of the process, there is a considerable delay.
Flushing is necessary in this case.

This is OT here, so I will not say much more than "it depends" :)
I agree. I'm still interested in a possible macro definition solution,
as well.

Under C99,

#define PRINT(...) \
    do { \
        printf(__VA_ARGS__); \
        fflush(stdout); \
    } while (0)

(untested)

Under C89, I would either switch off the buffering or write a
wrapper function

int flushprint(const char *format, ...)
{
    va_list args;
    int ret;

    va_start(args, format);
    ret = vprintf(format, args);
    va_end(args);
    if (fflush(stdout))
        ret = EOF;

    return ret;
}

(untested; in addition to <stdio.h>, you also need <stdarg.h>)
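
It is called exactly like printf(), e.g. (count being whatever you
happen to be printing):

flushprint("Loaded %d records\n", count);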


Cheers
Michael
 

Lawrence Kirby

I did not know about this, and it seems like it would have been an
acceptable solution. Just out of curiosity, why is stdout buffering
the default behavior? I can't think of that many reasons why buffering
of stdout would be needed anyway.

Buffering can greatly improve the performance of the application. As such
you generally want it on much more often than not.
I agree. And in command line mode, this is the behavior seen. However,
when this program is called as a child process and a pipe is used to
capture the stdout of the process, there is a considerable delay.
Flushing is necessary in this case.

What delay?

Even in situations like this it usually isn't necessary to flush after
every output operation. If for example you have 2 output operations one
after the other in the code you typically don't need to flush after the
first one. Generalising this you only need to flush output when the
program might be blocked or busy doing other things for an extended
period. In that case you can flush output before you read input or do
anything else that might block, or perform a significant calculation. In
many cases you can simplify this to flushing output before reading input,
often at a single point in the code.
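
In code, the "flush before reading input" case boils down to
something like this (untested):

#include <stdio.h>

int main(void)
{
    char line[256];

    printf("value? ");
    fflush(stdout);           /* flush before blocking on input */

    if (fgets(line, sizeof line, stdin) != NULL)
        printf("you typed: %s", line);

    return 0;
}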

Lawrence
 

Gordon Burditt

I agree. And in command line mode, this is the behavior seen. However, [...]

What delay?

If you are doing something time-consuming, it is common to output
something to indicate that the program is alive and actually doing
something. For example, it might output "Loading data\n" followed
by the message "Loaded %d records\n" for every thousand records it
loads. There are also programs that output a single period every
so often, again to indicate progress, or output a percentage and/or
estimated time to completion followed by a carriage return (so, on
many systems, it looks like the percentage is being periodically
updated). Or the program might be logging error messages related
to stuff it collects in real time. If you invoke the program
directly, everything works fine. If you invoke the program like
this:
program | tee logfile

then you may not see the first output for a long time or until the
program finishes (which might be "never" for a daemon).
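
A sketch of the kind of loop I mean (untested; the record loading
itself is only hinted at):

#include <stdio.h>

int main(void)
{
    long total = 100000L;     /* stand-in for the real workload size */
    long n;

    fprintf(stdout, "Loading data\n");
    fflush(stdout);                        /* visible even through a pipe */

    for (n = 1; n <= total; n++) {
        /* ... load one record here ... */
        if (n % 1000 == 0) {
            fprintf(stdout, "Loaded %ld records\n", n);
            fflush(stdout);                /* progress shows up promptly */
        }
    }
    return 0;
}
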
Even in situations like this it usually isn't necessary to flush after
every output operation. If for example you have 2 output operations one
after the other in the code you typically don't need to flush after the
first one. Generalising this you only need to flush output when the
program might be blocked or busy doing other things for an extended
period.

Or when you have produced a complete message that you want to be seen
even if no other messages are generated for a long period of time.
In that case you can flush output before you read input or do
anything else that might block, or perform a significant calculation. In
many cases you can simplify this to flushing output before reading input,
often at a single point in the code.

This is a good point: if you don't flush output before doing input,
you sometimes end up seeing the prompt only AFTER you've answered it.

Gordon L. Burditt
 

Alan Balmer

Or when you have produced a complete message that you want to be seen
even if no other messages are generated for a long period of time.

As in writing a trace message when you don't yet know exactly where
the program crashes ;-)
 

Eric Sosman

Gordon said:
[...]
This is a good point: if you don't flush output before doing input,
you sometimes end up seeing the prompt only AFTER you've answered it.

In this situation it's not a "prompt," but a "tardy."
 

CBFalconer

Gordon said:
.... snip ...

This is a good point: if you don't flush output before doing input,
you sometimes end up seeing the prompt only AFTER you've answered it.

My suggestion is simply to automate the flush with a macro:

#define promptf(s, f) do { fputs(s, f); fflush(f); } while (0)

Note that we can't use puts because of the extraneous \n. It might
be better implemented as an inline function when that is available.

inline int promptf(const char *s, FILE *f)
{
    fputs(s, f);
    return fflush(f);
}

and a define

#define prompt(s) promptf(s, stdout)
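
At the point of use that gives something like (untested; value is
just an example variable):

int value;

prompt("Enter a value: ");    /* the prompt is flushed before scanf blocks */
if (scanf("%d", &value) == 1) {
    /* ... */
}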
 

Lawrence Kirby

If you are doing something time-consuming, it is common to output
something to indicate that the program is alive and actually doing
something. For example, it might output "Loading data\n" followed
by the message "Loaded %d records\n" for every thousand records it
loads. There are also programs that output a single period every
so often, again to indicate progress, or output a percentage and/or
estimated time to completion followed by a carriage return (so, on
many systems, it looks like the percentage is being periodically
updated). Or the program might be logging error messages related
to stuff it collects in real time. If you invoke the program
directly, everything works fine. If you invoke the program like
this:
program | tee logfile

then you may not see the first output for a long time or until the
program finishes (which might be "never" for a daemon).

OK, yes, those are the situations we are trying to avoid.
Or when you have produced a complete message that you want to be seen
even if no other messages are generated for a long period of time.

The logic I suggested already covers this case, i.e. if the code performs
operations which may block or otherwise take a long time it first flushes.
This may not be simple or appropriate to implement in all circumstances
but it often is.
This is a good point: if you don't flush output before doing input,
you sometimes end up seeing the prompt only AFTER you've answered it.

This approach preserves the benefit of buffering by not flushing more
than necessary. It might flush when the output buffer is empty, but
that's likely to be a cheap operation. As well as lowering call
overhead, this can improve network operations by moving data in larger
chunks.

Lawrence
 
