On Wednesday, February 15, 2012 at 2:17:43 AM UTC+8, Ian Collins wrote: [..]
They are somewhat bigoted.
Actually, one of the dominant goals of GNU/Linux is availability on a
wide range of platforms. To bootstrap the development tool chain, some
core functionality needs to be implemented: library code, cross-compiling
code generators for the target architecture, hardware drivers, and code
to load and start the Linux kernel. A major milestone in porting an OS
to any platform is getting it to compile its own kernel; the rest is
more perspiration than inspiration.
Unfortunately only major platforms offer C++ compilers, and when they
do, there is a wide range of language implementation issues that make
writing common source code a PITA. Not so long ago, just to pick one
example, the popular Visual C++ 6 compiler had a unique interpretation
of the scope of variables declared in the loop control header of for
loops. You CAN write code in a way that runs unchanged on other
platforms, e.g. by wrapping each and every for loop in an extra pair of
curly braces (see the sketch below), but it looks ugly and is easy to
forget. Most of the time, you end up in an orgy of #ifdef's. The latter
is true for the C implementation of Linux, too.
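To make that concrete, here is a minimal illustration (not taken from any
particular code base) of the old VC6 quirk and the extra-brace workaround:

    // On a conforming compiler both loops compile: each 'i' is local to
    // its own for statement. Visual C++ 6 leaked 'i' into the enclosing
    // scope, so the second loop triggered a "redefinition" error there.
    void conforming_only()
    {
        for (int i = 0; i < 10; ++i) { /* ... */ }
        for (int i = 0; i < 10; ++i) { /* ... */ }
    }

    // Workaround of that era: an extra pair of braces around each loop
    // keeps the variable from leaking on any compiler - ugly, and easy
    // to forget.
    void portable_everywhere()
    {
        { for (int i = 0; i < 10; ++i) { /* ... */ } }
        { for (int i = 0; i < 10; ++i) { /* ... */ } }
    }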
C, however, is more readily available on smaller platforms (embedded
systems, mobile phones, ...) and has fewer portability problems, simply
because the language core is so much smaller. I believe this was the
main argument of the C vs. C++ factions in Linux development, not
issues regarding memory consumption (there is, in fact, no such issue)
and not performance worries. I am not aware of any problem solvable in
C that needs more run time when implemented properly (!) in C++.
If you write an OS for one platform only, or for a very limited set of
related platforms, I see no real reason why C++ should not be used for
writing the majority of OS code.
For example, the hash or map libraries in C++
are unreasonably large, and some are just faking
being a hash table, not O(1) even when there are no hash collisions at all.
Even with hash collisions, the amortized run time of hash tables is
O(1). A std::map is usually implemented as some sorted tree structure,
with search and insertion costing O(log n) time, because of the
requirement to get a sorted-order traversal in O(n) time. This is not
possible with hash maps.
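A minimal sketch of that trade-off using the standard containers (the
values are just an example):

    #include <iostream>
    #include <map>
    #include <string>
    #include <unordered_map>

    int main() {
        // std::map: typically a balanced tree; insert/find cost O(log n),
        // and iterating all n elements costs O(n) in sorted key order.
        std::map<std::string, int> tree{{"b", 2}, {"a", 1}, {"c", 3}};
        for (const auto& kv : tree)        // prints: a b c
            std::cout << kv.first << ' ';
        std::cout << '\n';

        // std::unordered_map (TR1/C++11): a hash table; insert/find cost
        // amortized O(1), but iteration order is unspecified - there is
        // no sorted traversal.
        std::unordered_map<std::string, int> hash{{"b", 2}, {"a", 1}, {"c", 3}};
        std::cout << hash.at("b") << '\n'; // average O(1) lookup
        return 0;
    }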
A hash that can be frozen at compile time
is not well distinguished from a map that is
modified very often at run time.
This is why it took so long for C++ to implement
a true hash table.
Hash maps took so long to get into the standard because the design of
the hash function interface deserves a lot of deliberation - it's
already tough to find a hash function for a data type like "subset of
int". You need to know the domain and relative frequency of keys well
to make a good choice. Data types with no or only vague identifying key
data are even more difficult. Often the binary representation of part
of the data structure is used as hash function input in real-world
implementations, but for a universal library? - eeks.
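For illustration, this is roughly what the interface the committee settled
on looks like for a user-defined key type (the type, field choice and the
combining scheme below are just an example - picking them well is exactly
the domain knowledge mentioned above):

    #include <cstddef>
    #include <functional>
    #include <string>
    #include <unordered_map>

    // Example key type: which fields identify it, and how well their
    // values spread, is something the library cannot guess for you.
    struct Employee {
        std::string name;
        int         department;
        bool operator==(const Employee& o) const {
            return name == o.name && department == o.department;
        }
    };

    // Hand-written hash functor: combine member hashes. This combining
    // pattern is one common choice, not the only possible one.
    struct EmployeeHash {
        std::size_t operator()(const Employee& e) const {
            std::size_t h = std::hash<std::string>()(e.name);
            h ^= std::hash<int>()(e.department) + 0x9e3779b9 + (h << 6) + (h >> 2);
            return h;
        }
    };

    // The container is parameterized on both the hash and the equality predicate.
    using EmployeeMap = std::unordered_map<Employee, int, EmployeeHash>;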
The second issue was the uncomfortable decision to use closed
addressing (linearly searched chains in hash buckets) for collision
handling. For many years already I have been using my own hash map
implementation with open addressing (double hashing on *very* carefully
chosen hash table address spaces). It does insertion and lookup between
two and three times (!) faster on average than any closed addressing
implementation I've seen. It does not handle arbitrary deletions,
though; only deletion in exact reverse order of insertion is supported.
Not suited for a standard library that emphasizes orthogonality - all
algorithms should run on all containers.
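To show what open addressing with double hashing means in principle, here
is a deliberately simplified, generic sketch - it is not the implementation
described above, and it omits deletion, resizing and negative keys entirely.
On a collision, the probe advances by a stride derived from a second hash
function; with a prime table size every stride visits all slots:

    #include <cstddef>
    #include <vector>

    // Minimal open-addressing set of non-negative keys (-1 marks an empty slot).
    class IntSet {
        std::vector<long> slots;
        std::size_t       capacity;   // should be prime (> 2)

        std::size_t h1(long k) const { return static_cast<std::size_t>(k) % capacity; }
        std::size_t h2(long k) const { return 1 + static_cast<std::size_t>(k) % (capacity - 2); }

    public:
        explicit IntSet(std::size_t prime_capacity)
            : slots(prime_capacity, -1), capacity(prime_capacity) {}

        bool insert(long key) {
            std::size_t pos = h1(key), step = h2(key);
            for (std::size_t i = 0; i < capacity; ++i) {
                if (slots[pos] == -1) { slots[pos] = key; return true; } // free slot
                if (slots[pos] == key) return false;                     // already present
                pos = (pos + step) % capacity;                           // next probe
            }
            return false;  // table full
        }

        bool contains(long key) const {
            std::size_t pos = h1(key), step = h2(key);
            for (std::size_t i = 0; i < capacity; ++i) {
                if (slots[pos] == -1) return false;    // hit an empty slot: absent
                if (slots[pos] == key) return true;
                pos = (pos + step) % capacity;
            }
            return false;
        }
    };

The reverse-order-only deletion mentioned above follows from this layout:
removing an arbitrary entry would leave a hole that breaks later probe
chains, whereas undoing the most recent insertion cannot.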
MiB.