Some people say sizeof(type) and others say sizeof(variable).
Why?
I've been reading the thread discussing 'sizeof' and its usage with
interest. I've gone back and forth a bit in my own development
practices for using sizeof, and the discussion has sharpened
my views enough that I'm ready to offer my $0.02.
Executive summary: prefer sizeof(x) and sizeof(*p) to sizeof(type).
First, the starting question, 'sizeof(type)' or 'sizeof(variable)'?
Looking through several hundred thousand lines of code, I'm convinced
'sizeof(variable)' [or 'sizeof(expression)'] is almost always better
than 'sizeof(type)'. Using 'sizeof(type)' is similar to using "magic
numbers" in open code - better to relegate those few cases that really
need to do 'sizeof(type)' to #defines, and then use the symbolic name.
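A minimal sketch of the difference (the struct name 'widget' here is
made up for illustration):

#include <stdlib.h>

struct widget { int id; double weight; };

int main(void)
{
    /* sizeof(*p) tracks the pointee's type: if p is later changed
       to point at some other struct, this line stays correct. */
    struct widget *p = malloc(sizeof(*p));

    /* sizeof(type) repeats the type name like a magic number: change
       p's declaration and this allocation is silently wrong. */
    struct widget *q = malloc(sizeof(struct widget));

    free(p);
    free(q);
    return 0;
}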
Second, to parenthesize or not parenthesize? I think the easiest way
to think of 'sizeof' is as a special kind of macro, much like the
special macro 'offsetof()', and just always use parentheses. I know
that technically that isn't right, but it's easy to explain, easy to
remember, and reduces cognitive load when reading code. What about
people who use the 'sizeof x' form, without parentheses? I wouldn't
object to that too strongly; I'd put it in the same category as
writing array indexing with the integer on the outside, e.g., 0[p] --
experts do it, and if one wants to be considered an expert one should
know about it, but it's not necessary to use it just because you can.
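For what it's worth, the grammar distinction the "treat it like a
macro" rule papers over is that the parentheses are only required
when the operand is a type name, not an expression. A small sketch:

#include <stdio.h>

int main(void)
{
    int x = 0;
    int *p = &x;

    size_t a = sizeof x;     /* legal: operand is an expression */
    size_t b = sizeof(x);    /* same thing, with parentheses */
    size_t c = sizeof(int);  /* parens required: operand is a type name */
    /* size_t d = sizeof int;   would not compile */

    printf("%zu %zu %zu %zu\n", a, b, c, sizeof *p);
    return 0;
}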
Third, a related topic: a little macro definition that I've found
useful:
#define bitsizeof(x) (sizeof(x) * CHAR_BIT)
Of course this needs a #include <limits.h> to work properly.
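One place it comes in handy (a small made-up usage; <limits.h>
supplies CHAR_BIT, as noted above):

#include <limits.h>
#include <stdio.h>

#define bitsizeof(x) (sizeof(x) * CHAR_BIT)

int main(void)
{
    unsigned long mask;

    /* build a mask with the high bit of 'mask' set, without
       hard-coding the width of unsigned long anywhere */
    mask = 1UL << (bitsizeof(mask) - 1);

    printf("unsigned long is %zu bits; high-bit mask = %#lx\n",
           bitsizeof(mask), mask);
    return 0;
}

Note that bitsizeof(x), like sizeof itself, doesn't evaluate its
operand, so using it on an uninitialized variable as above is fine.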