|> > What risks do you see?
|> 1. Code becomes quickly difficult to understand. Or worse: it makes
|> you think you understand it, when you actually don't.
You don't need function overloading for that.
|> 2. As any other kind of superfluous facility, it's an incentive to
|> use and abuse it.
As you say, it is true for just about any facility. It's a negative
point; to justify any facility, you have to provide offsetting positive
points.
|> 3. There can be some nasty "ambiguous overloading" bugs. I've seen
|> it many times in C++ code. Not all compilers are able to catch them.
|> It can especially happen with operator overloading.
The answer to that is simple: don't use C++'s complex rules for overload
resolution.
|> 4. It makes you "overload" functions that you think are
|> functionally similar. It may later on turn out that they are not so
|> similar, but by the time you realize that, hundreds of other
|> functions may depend on them. A hell to maintain.
See point 1. You can misname functions just as badly without
overloading.
|> 5. It's usually some overhead at run-time (although this point may
|> not qualify as a "risk"). And a huge overhead at compile time, which
|> has its importance when you work on very big projects (might not
|> want to wait for hours while your code compiles...)
Overloading is fully resolved at compile time. Runtime overhead is 0.
The amount of compile time overhead depends on the complexity of the
overload resolution rules (see point 3) -- although it is never totally
free, it doesn't have to be huge.
|> 6. Speaking of overhead, they often imply security issues through
|> the use of "v-tables". I'm not getting into details here, but many
|> papers have been written on the subject.
I think you're thinking of virtual functions, not overloading.
Overloading doesn't involve v-tables or any runtime mechanism.
In the end, like everything else, it is a tradeoff. Generally speaking,
concerning the positive aspects, I would say:
- It is almost essential to write good mathematical software (and some
business software). But for that, you need not only function
overloading, but full user defined types, with operator overloading
as well. What good is being able to write sin(aBigDecimal) if I have
to write bigDecimal1.add( bigDecimal2 ), rather than bigDecimal1 +
bigDecimal2?
- It can be very useful in certain cases of generic programming: C++
templates or perhaps some fancy macro generated code in C.
Offsetting those advantages is that the overloading rules almost have to
be complicated in C/C++, given the number of implicit conversions the
languages support. And it isn't so much overloading itself that costs,
it is the complexity of the overload resolution rules, which means that
1) the compiler has a lot of work to do, and 2) even more important, it
becomes a real guessing game for whoever reads the program to know which
function is actually going to be called.
I work mostly in C++, where I have overloaded functions. From
experience, I really only use them in the following cases:
- constructors -- C++ requires all constructors for a given class to
have the same name, so you don't have much choice,
- smart pointers -- this is a very C++ specific technique for resource
management, and totally irrelevant in C, and
- my BigDecimal class, which implements the full decimal arithmetic I
need for some commercial applications.
IMHO, only the last is even remotely relevant to C, and I'm not sure
that simple function overloading is the best answer (in C).