TonyMc
I confess I have always taken an ad hoc approach to error handling in
programs that I write. I think that is justified since they are written
only for my own use, so if I enter strange data and the program calls
exit(EXIT_FAILURE) after printing a message to stderr, I can cope with
that and try again.
However, for my own learning, I am interested in adopting a more
systematic approach to handling errors and unexpected conditions. So
far I have the following functions and macros:
void syserror(const char *fname, int line, const char *fmt, ...);
void syswarning(const char *fname, int line, const char *fmt, ...);
void error(const char *fname, int line, const char *fmt, ...);
void warning(const char *fname, int line, const char *fmt, ...);
#define SYSERROR(...) syserror(__FILE__, __LINE__, __VA_ARGS__)
with similar #defines for the other three functions.
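(For concreteness, the other three wrappers would presumably mirror the
SYSERROR define above -- these exact names are my guess, not quoted from
the original code:)

```c
/* Hypothetical companions to SYSERROR, following the same pattern:
 * each captures the call site via __FILE__ / __LINE__ so the report
 * points at the caller, not at the reporting function itself. */
#define SYSWARNING(...) syswarning(__FILE__, __LINE__, __VA_ARGS__)
#define ERROR(...)      error(__FILE__, __LINE__, __VA_ARGS__)
#define WARNING(...)    warning(__FILE__, __LINE__, __VA_ARGS__)
```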
In my program, I then do something like:
if ((fp = fopen(fname, "r")) == NULL)
        SYSERROR("Can't open input file %s.", fname);
The sysxxxx() functions differ from the xxxx() functions in using
strerror() to print the system-specific error message from the errno
value. These are useful only for handling errors generated by library
functions that set errno to a useful value. The error() and warning()
functions are for problems encountered in my own code rather than in
calls to the OS. The error() variety calls exit(), the warning()
variety just sends a message to stderr and returns.
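(A minimal sketch of what such a pair might look like -- the names and
message format here are assumptions, not the actual code. One detail
worth getting right: save errno on entry, because stdio calls made
while formatting the message can themselves change it.)

```c
#include <errno.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Warn: print location, the formatted message, then the strerror()
 * text for the errno value saved on entry, and return to the caller. */
void syswarning(const char *fname, int line, const char *fmt, ...)
{
    int saved = errno;          /* save before stdio can clobber it */
    va_list ap;

    fprintf(stderr, "%s:%d: warning: ", fname, line);
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);
    fprintf(stderr, ": %s\n", strerror(saved));
}

/* Error: same report, but fatal. */
void syserror(const char *fname, int line, const char *fmt, ...)
{
    int saved = errno;
    va_list ap;

    fprintf(stderr, "%s:%d: error: ", fname, line);
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);
    fprintf(stderr, ": %s\n", strerror(saved));
    exit(EXIT_FAILURE);
}
```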
So, is that the sort of scheme that others use? Are there obvious
problems with it? And what alternatives do people use? I have seen
jumps to cleanup code at the end of a function, setjmp/longjmp (which, I
confess, does my head in - it feels like going back in time) and
obviously there are much more robust and sophisticated techniques which
attempt to fix the problem and continue rather than simply exiting or
issuing a message. I'm interested to know what others do. Also, if you
can point me to some resources that discuss different strategies in C,
that might be helpful.
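(For comparison, the "jump to cleanup code" pattern mentioned above
usually looks something like this -- copy_file is an invented example
function, but the shape is the standard idiom: acquire resources in
order, jump to one label that releases whatever was acquired, and
return a status so the caller decides what to do instead of exiting.)

```c
#include <stdio.h>
#include <stdlib.h>

/* Copy src to dst. Returns 0 on success, -1 on any failure;
 * the single cleanup label releases whatever was acquired. */
int copy_file(const char *src, const char *dst)
{
    int rc = -1;                /* pessimistic default */
    FILE *in = NULL, *out = NULL;
    char buf[4096];
    size_t n;

    if ((in = fopen(src, "rb")) == NULL)
        goto cleanup;
    if ((out = fopen(dst, "wb")) == NULL)
        goto cleanup;
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)
        if (fwrite(buf, 1, n, out) != n)
            goto cleanup;
    if (ferror(in))
        goto cleanup;
    rc = 0;                     /* success */

cleanup:
    if (out && fclose(out) != 0)
        rc = -1;                /* flush failure counts as failure */
    if (in)
        fclose(in);
    return rc;
}
```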
Thanks,
Tony