No, I disagree. They need to know what the language says about the
various types so they can use the correct one. The cases where one needs
to know are very rare.
Unless they need to access a disk file with a given structure, or read a
network packet with a given structure, which is hardly an uncommon task
(to say the least). In such cases fixed sizes are required, and fixed
sizes are needed in general programming as well.
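For illustration only (a hypothetical header, not anything from this
thread), the C99 <stdint.h> names let every field in such a structure
be given an exact width:

#include <stdint.h>

/* Hypothetical on-disk header: every field has an exact, fixed width,
   so the layout means the same thing regardless of which compiler or
   machine reads the file. (Byte order and struct padding still have
   to be pinned down separately.) */
struct file_header {
    uint32_t magic;        /* always 4 bytes */
    uint16_t version;      /* always 2 bytes */
    uint16_t flags;        /* always 2 bytes */
    uint64_t payload_len;  /* always 8 bytes */
};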
I took a peek at a bit of your code (the sha1 utility) and it breaks on
my machine for just this reason -- an assumed integer size rather than
using a type with a known size. (That's after fixing a couple of calls
to a function where the wrong number of arguments was passed. Your
compiler should have spotted that.)
The core of the SHA-1 engine was taken from public domain code. I have
left its code unchanged apart from the types: their definitions now use
the forms declared at the head of sha1.cpp, so that they can be
redefined based on the requirements of a particular flavor of C. The
algorithm comes with its own self-test to determine whether it is
working properly. Compiling my sha1.cpp program with _TEST_ME defined
allows it to be stand-alone; otherwise it can be used as an #include
file.
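Roughly, the stand-alone arrangement looks like the sketch below;
sha1_self_test() is a placeholder name, not the real identifier from
sha1.cpp:

extern int sha1_self_test(void);   /* placeholder declaration */

#ifdef _TEST_ME
/* With _TEST_ME defined, the file builds as a stand-alone program
   that runs the algorithm's built-in self-test. Without it there is
   no main(), and the file can simply be pulled in with #include. */
int main(void)
{
    return sha1_self_test() ? 0 : 1;
}
#endif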
To address your issues, I have bypassed this limitation in C with my
own mechanism. There is a block of typedefs at the head of the file
which determines which native compiler type is used for each of the
fixed-size entities I use. And, since C is lackadaisical in this area,
you will have to redefine those typedefs manually for your particular
version of C, so that each one keeps the bit size its name indicates
(8 for u8, 32 for u32, and so on).
From the top of sha1.cpp:
typedef unsigned long long u64;
typedef unsigned long u32;
typedef unsigned short u16;
typedef unsigned char u8;
typedef long long s64;
typedef long s32;
typedef short s16;
typedef char s8;
typedef float f32;
typedef double f64;
These work for Visual C++. If you use another compiler, re-define them
to work for your platform. Were you using RDC, they would be native
types and there would not be an issue. Ever.
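Since those typedefs have to be adjusted by hand, it may be worth
letting the compiler verify them. This is only a suggestion, not
something in sha1.cpp: the array trick below fails to compile whenever
a typedef does not have the width its name promises.

/* Each array gets a negative size, and the build breaks, if the
   corresponding typedef has the wrong width. Works in C89 and C++. */
typedef char check_u8_is_1_byte  [(sizeof(u8)  == 1) ? 1 : -1];
typedef char check_u16_is_2_bytes[(sizeof(u16) == 2) ? 1 : -1];
typedef char check_u32_is_4_bytes[(sizeof(u32) == 4) ? 1 : -1];
typedef char check_u64_is_8_bytes[(sizeof(u64) == 8) ? 1 : -1];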
C has that, or very nearly that, depending on exactly what you mean.
The bonus that C brings (albeit at the expense of complexity) is that
you can use plain int where you know it will work, with a reasonable
assurance that it will be fast, since it is the "natural" integer type
for the hardware.
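For reference, C99's <stdint.h> offers both: exact-width types for when
the layout matters, and the int_fastN_t family for when speed matters
more than exact width. A minimal sketch:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t      exact = 40000u;  /* exactly 32 bits on any platform
                                      that provides the type          */
    uint_fast32_t fast  = 40000u;  /* at least 32 bits, whichever width
                                      is fastest on this hardware     */

    printf("exact = %" PRIu32 ", fast = %" PRIuFAST32 "\n", exact, fast);
    return 0;
}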
Yeah, I don't care about that. I want my ints to always be 32-bit,
something I do care about (because I'm accessing data transmitted
across a network, or read from disk, and it comes with 32-bit and
64-bit quantities I need to explicitly access).
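One hedge against both the width of int and the byte order is to pull
such quantities out of the raw byte stream explicitly. A sketch, not
code from this thread:

#include <stdint.h>

/* Read a 32-bit big-endian ("network order") value from a byte
   buffer. Nothing here depends on the width of int or on the host's
   byte order. */
static uint32_t read_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) |
           ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] <<  8) |
            (uint32_t)p[3];
}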
Computers today are blazingly fast at nearly everything they do. There
are components which make them appear slower than they really are
(reading data from disk or a network, waiting for user input, etc.),
but they are amazing processing engines.
FWIW, I would rather have slower code operating in parallel than faster
code operating in serial. Intelligent design, not reliance upon hardware.
As was said about a decade ago when the MHz wars ended: "The free lunch
is over. Now it's time to do real programming."
Quite. It does not seem to meet your needs. Why are you using it?
Because, at present, it is the best tool for what I'm trying to accomplish.
Once I complete RDC, I will never look back ... except for when I also
desire to bring a true C standard into an add-on, so that existing C
code will compile without alteration using the spec. Prayerfully it will
be someone else who ultimately codes that engine. For me, it's about 9th
on my list of things to do:
RDC/VXB++
Visual FreePro
Whitebox
Journey database engine
Exodus-32
Armodus-23
Exodus-64
Armodus-64
Other languages, including C, ported to the RDC compiler framework.
Best regards,
Rick C. Hodgin