It doesn't really solve the problem. One of the most dangerous
features of C is the ability to typedef a basic type to something
like, say, DWORD. Then you find yourself rewriting perfectly good
code,
just because someone decided to put DWORDs where they really meant
"int", and the code no longer runs under the particular operating
system where DWORDs are used.
Do you have the same problem with int32_t? The only difference is
that one is *standardized* and the other 'DWORD' is not. The
underlying concept is the same; you want a 32-bit quantity both on
architectures whose native word is 16 bits and on those whose native
word is 32 bits.
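For illustration, here is a minimal sketch of both spellings of the
same intent; the 'DWORD' typedef below is a project-local stand-in
for the idea, not the actual Windows header definition:

```c
#include <stdint.h>

/* A project-local name for a 32-bit unsigned quantity (hypothetical;
 * Windows headers define DWORD similarly, but this is only a sketch). */
typedef uint32_t DWORD;

/* int32_t carries the same width guarantee, but the name is
 * standardized, so every C99 implementation agrees on its meaning. */
static int32_t add_halves(int16_t hi, int16_t lo)
{
    /* Widen to 32 bits before combining, regardless of sizeof(int). */
    return ((int32_t)hi << 16) | (uint16_t)lo;
}
```

Either name documents the same intent; the standardized one simply
removes the need for every project to reinvent it.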
The benefit of typedefs is that it allows a module writer to define
types with stricter semantics. In this case, 'DWORD' has stricter
semantics because it is required to be 32 bits wide; an 'int' type
carries no such guarantee.
One can also use a typedef to give a common name to a type with
specific semantics. The example I commonly use is a typedef of
'int' to define a type representing a month.
typedef int greg_month;
Even though 'int' is the base type, the added semantics placed on
'greg_month' is that it is only allowed to store values from 1 to 12,
and perhaps -1 to represent an error state. Writing a function API
using these typedefs increases developer comprehension.
void create_report( struct my_report* report,
                    struct my_data* data,
                    greg_month start,
                    greg_month end );
One could certainly use 'int' for the start and end months, but
'greg_month' conveys its constraints better than 'int' does.
One sees 'greg_month' and knows that its implied range is from 1 to
12; the same cannot be said of 'int'. Unfortunately, C does not
provide automatic enforcement of those semantics, and the module
writer must take pains to write explicit checks ensuring that the
values passed as 'start' and 'end' match the intended semantics of
'greg_month'.
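A minimal sketch of what that hand-written enforcement might look
like; the sentinel and helper names below are my own invention, not
part of any standard:

```c
#include <stdbool.h>

typedef int greg_month;          /* as above: an int with extra rules */

#define GREG_MONTH_ERROR (-1)    /* hypothetical error-state sentinel */

/* The compiler will not enforce the 1..12 range, so the module
 * writer must check it by hand at the API boundary. */
static bool greg_month_is_valid(greg_month m)
{
    return m >= 1 && m <= 12;
}

/* Clamp an arbitrary int into the type's intended semantics. */
static greg_month greg_month_checked(int value)
{
    return greg_month_is_valid(value) ? value : GREG_MONTH_ERROR;
}
```

'create_report' could then call 'greg_month_is_valid' on 'start' and
'end' before doing any work.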
What one gains in developer comprehension becomes a flaw at the
integration stage. If one wants to use different libraries by
different people with different representations of the same thing, it
quickly becomes difficult to reconcile these differences. It is a
fundamental flaw of typedefs, as the plethora of boolean type styles
before standardization in C99 demonstrates.
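The pre-C99 boolean situation can be sketched with two hypothetical
libraries, each inventing its own boolean type (the names here are
illustrative, not from any real header):

```c
/* Library A's style: a plain int holding 0 or 1. */
typedef int BOOL;

/* Library B's style: a single byte. */
typedef unsigned char boolean;

static BOOL lib_a_is_even(int n)    { return n % 2 == 0; }
static boolean lib_b_is_even(int n) { return (boolean)(n % 2 == 0); }
```

Each choice is locally reasonable, but code that uses both libraries
must reconcile 'BOOL' and 'boolean' at every boundary -- exactly the
integration problem described above.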
Even so, I still consider the use of typedef a net positive, despite
integration being a serious problem.
But it alleviates it, because it's easy to write a function that
operates on lists (a list is an ordered collection, usually of like
items) as taking an array and a count. It's hard to do anything
fancier, like wrapping the list into a structure with a "length"
member, creating a linked list, or semi-hardcoding the length of the
array with a preprocessor define. So the plugs might not fit the
sockets, but at least all the sockets are set up in a similar way.
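The array-and-count convention can be sketched as follows;
'sum_list' is a hypothetical example of such a function:

```c
#include <stddef.h>

/* The common C idiom: the "list" is just a pointer to the first
 * element plus a count. No container type needs to be agreed on. */
static int sum_list(const int *items, size_t count)
{
    int total = 0;
    for (size_t i = 0; i < count; i++)
        total += items[i];
    return total;
}
```

Any caller with a contiguous buffer of ints can use this function,
no matter how that buffer was allocated or wrapped.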
Once you start allowing containers, that simplicity goes.
I consider simplicity and power at two ends of a spectrum. If one
must start from scratch, there is a lot to be said to start with
simplicity. But as the limitations of simplicity become apparent,
people will desire constructs with more power. And if that power
comes in the form of a standardized library (the STL), it is
reasonable to believe that one can use the standard to justify
adopting a complex library, just as one should prefer a C standard
library function.
That power comes with a cost, which is increased sophistication and
comprehension required for all developers using the library. I find
that this effect produces greater stratification of expertise among
C++ developers than among C developers. Are you in the stratum that
can properly use the STL, or template meta-programming, or design
class hierarchies, or handle exceptions? The list of features keeps
growing.
I view C and C++ not as superior or inferior to one another, but as
points on this spectrum with their own advantages and disadvantages.
simplicity<----------------------->power
C++ keeps stretching its position to the right, while C tends to
enjoy the position where it is.
Best regards,
John D.