Fair enough, if those are your only choices. Quite often, however,
the cost of verification is insignificant. Choosing the *fast*
library could be a premature optimization. Now if I actually had two
libraries that did the same thing *and* I was really, really concerned
with speed *and* I did performance measurements on both *and* there was a
measurable difference, then I might go with the one that doesn't
verify its inputs. How often does that happen?
It's pretty easy to come up with examples of functions that are
trivial if you don't check the arguments and nontrivial if you
do. An excerpt from a library of mine is below. Each of these
functions is only one or two machine instructions when inlined.
If the functions were modified to check for null pointers and for
invalid pointers, then they would at least double in code size
(making them less suitable as inline functions) and presumably in
execution time also.
Elsewhere in this thread, Jacob Navia suggested using 64-bit
magic numbers to mark memory regions of a particular type. That
would also increase the size of this data structure by 40% with
the C implementation that I most commonly use. Perhaps not
fatal, but definitely significant.
/* A node in the range set. */
struct range_set_node
  {
    struct bt_node bt_node;         /* Binary tree node. */
    unsigned long int start;        /* Start of region. */
    unsigned long int end;          /* One past end of region. */
  };

/* Returns the position of the first 1-bit in NODE. */
static inline unsigned long int
range_set_node_get_start (const struct range_set_node *node)
{
  return node->start;
}

/* Returns one past the position of the last 1-bit in NODE. */
static inline unsigned long int
range_set_node_get_end (const struct range_set_node *node)
{
  return node->end;
}

/* Returns the number of contiguous 1-bits in NODE. */
static inline unsigned long int
range_set_node_get_width (const struct range_set_node *node)
{
  return node->end - node->start;
}