lol. Anyway, some people might compare the type of programming I do with "rocket science", or perhaps compare it to juggling running chainsaws. Well, let's see if you can accomplish a task that I consider to be fairly easy. I want to create a reader/writer lock that has wait-free fast-paths on both the writer and reader side. I want this lock to be 100% starvation-free for both writers and readers. I want bounded time for both readers and writers. You're not allowed to use CAS. After you accomplish that, I want you to augment your algorithm to allow for conditional reader-to-writer upgrades, and unconditional writer-to-reader downgrades. You may use CAS for the upgrade/downgrade, but not for reader/writer acquisition or release. BTW, you're not allowed to use any loops in the algorithm. Oh, one more thing: I will make it really easy for you in that you do not have to document any membars. Once you accomplish this task, then we can begin to discuss some more exotic synchronization algorithms...
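[One ingredient of such a lock, a CAS-free and loop-free reader fast path, can be sketched with atomic fetch-and-add. This is a hedged illustration, not a solution to the full challenge: the writer side below still spins (relaxing the no-loop rule), the sketch is not starvation-free, and the names and bit layout are invented for the example. In C11:

```c
#include <stdatomic.h>

/* Writer-present flag sits above the reader count (layout invented
   for this sketch). */
#define WRITER_BIT 0x10000

static atomic_int lock_word;

/* Wait-free reader fast path: one fetch-and-add, no CAS, no loop.
   Returns 1 on success; 0 means a writer is present, and a real
   implementation would fall back to a blocking slow path here. */
int read_tryacquire(void) {
    int prev = atomic_fetch_add(&lock_word, 1);
    if (prev & WRITER_BIT) {
        atomic_fetch_sub(&lock_word, 1);  /* back the increment out */
        return 0;
    }
    return 1;
}

void read_release(void) {
    atomic_fetch_sub(&lock_word, 1);
}

/* Writer side: publish the writer bit, then wait for readers to
   drain. These spins are exactly where the "no loops" rule is
   being relaxed in this sketch. */
void write_acquire(void) {
    while (atomic_fetch_or(&lock_word, WRITER_BIT) & WRITER_BIT)
        ;  /* another writer already holds the bit */
    while (atomic_load(&lock_word) & (WRITER_BIT - 1))
        ;  /* readers still inside */
}

void write_release(void) {
    atomic_fetch_and(&lock_word, ~WRITER_BIT);
}
```

On x86 the fetch-and-add typically compiles to a single LOCK XADD, which is why the reader fast path is wait-free; making the writer side loop-free and the whole thing starvation-free is the hard part of the challenge as stated.]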
I'm sorry Chris but I don't feel
Like reinventing another wheel
Creating a high-performance reader/writer lock is not a waste of time. Heck, even Sun Microsystems has an entire group working on synchronization primitives. And guess what? They just released a paper on high-performance reader/writer locks. They are using a distributed counting technique to increase scalability. Their work is not in vain.
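[The distributed-counting idea can be sketched as follows; this is a hedged illustration, with the slot count, the thread-id hash, and all names invented for the example. The single reader counter is split across several slots so concurrent readers do not contend on one memory word, and the writer pays for that by scanning every slot. Real implementations also pad each slot to its own cache line.

```c
#include <stdatomic.h>

#define SLOTS 8  /* illustrative; real code sizes this per CPU and
                    pads each slot to its own cache line */

static atomic_int reader_slots[SLOTS];

/* A reader touches only "its" slot (picked here by a trivial
   thread-id hash), so concurrent readers rarely share a word. */
void reader_enter(unsigned tid) {
    atomic_fetch_add(&reader_slots[tid % SLOTS], 1);
}

void reader_exit(unsigned tid) {
    atomic_fetch_sub(&reader_slots[tid % SLOTS], 1);
}

/* The writer pays the price: it must sum every slot to learn
   whether any readers remain. */
int readers_present(void) {
    int sum = 0;
    for (int i = 0; i < SLOTS; i++)
        sum += atomic_load(&reader_slots[i]);
    return sum != 0;
}
```

The trade is deliberate: reader entry and exit stay cheap and contention-free, while the writer's check becomes a scan, which suits read-mostly workloads.]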
This is not my area of expertise.
You MAY, from where I sit, not be aware that it MAY be impossible to actually do this on arbitrary platforms. If the lock is a single instruction in your system's machine language, the code isn't portable to another system.
Actually, my synchronization code is portable to a fairly wide range of
architectures. I know the caveats, and I know how to work around them.
I am unqualified to debate these issues with you at this time. I
studied the subject in graduate school but IMO at the time, the field
of low-level code was too inchoate and inarticulate, with manufacturer
greed and arrogance creating a toxic smog of disinformation.
Everybody seemed to have their own axe to grind, and I could never
tell the difference between a valid technical claim and OEM greed
speaking through a hired hand. I vastly preferred compiler development
since programming languages arose, like human speech itself, outside market forces.
But I concede that you have done significant work. My question
remains: was it your talent and genius that did the work, or some
magical property of C? I would say it is the former.
I use a mixture of C and assembly language. C is low-level enough to allow me to do what I want. Tell me, if I used C#, how could I ensure that, for instance, data is padded to an L2 cache line and aligned on an L2 cache line boundary? Do you know why that's important wrt synchronization
That seems to this relative layperson pretty silly. It fetishizes a "cache boundary" which is not a law of nature, but the result of another developer wanting, for a good reason or bad, to do bit shifts rather than addition or subtraction... I think.
It's absurd to reason from such a specific need to the conclusion that
C is great.
algorithms? Don't get me wrong, C# is a fine language and I do use it from
time to time. But, I can reap much better performance if I use C and ASM to
create synchronization libraries. I can then use that library in C#,
everybody wins.
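[For what it's worth, the padding-and-alignment trick described above looks roughly like this in C11. A sketch: the 64-byte line size is an assumption that varies by CPU, and the type names are invented for the example.

```c
#include <stdalign.h>

/* Assumed line size; the real value varies by CPU (query it at
   build or run time in production code). */
#define CACHE_LINE 64

/* Each counter gets its own aligned, padded slot, so two threads
   updating neighbouring counters never fight over one cache line
   (false sharing). */
struct padded_counter {
    alignas(CACHE_LINE) long value;
    char pad[CACHE_LINE - sizeof(long)];
};

static struct padded_counter counters[4];
```

C# offers some layout control through attributes, but nothing this direct for pinning data to cache-line boundaries, which is presumably the point being made.]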
You need some sociology to go with all those bits and bytes. Critical
theory teaches us that modern society needs for its continued
viability to present a "naturalized" face to the ordinary person.
Social structures which were questioned in the French revolution, in
1830, in 1848 and in the Paris Commune need to be thought of as
"reality", and this is the reason why programmers think inside the
box.
I've never written an OS exclusively in a high level language, but I
believe it's been done. Those efforts fell by the wayside because they
would have in their day commodified mainframes, and this was
unacceptable. Therefore, programmers learned to build in all sorts of
dependencies in their code in order to make it a unique product that
would not interoperate, until Open Source. But by that time, the
damage had been done. Countless architectures had been built around
the quirks of C, for example.
Elvis Costello has a song about the Thatcher era: all this useless
beauty. Much of technology is useless beauty because if we'd done it
right in the first place, there would be no need for all this
Beautiful Code.
A society with a World Government and a World Mainframe running Cobol
would be boring, but would probably feature a three day work week.
Everyone secretly prefers such a fantasy to the one we have which is
in the process of going arse up, in some small part thanks to "new
paradigms" like C without originality that produce much heat but
little light in the long run.
[OK, not Cobol. Some language as yet unwritten. Like mine. Yeah,
right. Well you get the picture: we wanted socialism but like Captain
America and Billy in Easy Rider, we blew it.]
Let me return however to a more technical subject. You are saying
something true GIVEN the existing state of technology: that languages
like C# and Java with a runtime cannot run by themselves without an
underlying codebase that does deep and nasty things in ASM and C.
For C# or Java to work all the way down to the bare metal, the bare
metal would have to be a C# or Java machine perhaps with microcode to
check for type safety, and the "computer designed for the language"
would walk again as it did pre-RISC.
But this means that there's nothing special about C. It was the
madness or the wisdom of a crowd.