C Programming
Is C99 the final C?
[QUOTE="Paul Hsieh, post: 1692272"] Given any definition of portable. There are no C99 compilers, compliant or otherwise. Portable in C means C89 minus the incompatibilities with C++. I mean portable in the same sense that the implementors of the Lua language considered C to be portable, but not to the degree that the JPEG people considered C to be portable (they wrote code that works on K&R C as well as ANSI C -- but K&R is something we need to leave behind). And of course not what the GMP or Python people consider portable (i.e., "we ported it"). And of course you have to pay deference to the C++ people, because there is scarcely a C compiler in existence today that isn't also a C++ compiler, and users of that language will not understand your pedantry when you explain that "my code is portable, it's just that your compiler is not."

Are there any brain cells operating in your head? I typed "Lamport's Bakery Algorithm" into Google. What do you possibly imagine I could have considered typing in other than that? Then you are just braindead? There's no leeway here. There's no other possible thing that I could possibly type into Google to help me look this up. What were you expecting me to type in? "Worthless non-solutions to critical section"? "The idiot's guide to multitasking incorrectly"? "Race conditions are us"? "Good-looking algorithms laced with (not-so-well) hidden weaknesses"?

But my assertion is independent of any reformulation. That's part of my claim, as I have posted. In other words, *YOU* could pick any reformulation you like and test it against my claim. "Iterating over all processes", or self-identifying processes, is central to the way the algorithm works. You brought it up without being familiar with it? We're like 6 posts into this and you still haven't addressed the very simple and obvious assertion that it's not portable. And you're banking everything on your ability to argue this fact with me? You just have no perspective on this.
It does not even occur to you that I'm not just spouting some "theory" that I just made up. Refereed papers are written about "lockless" or "semi-locked" ADTs. Still others are written about ADTs requiring only minimal hardware support (like just shared memory, or a 2-write atomic memory operation). Now why would they bother if portable mutex implementations existed? Why does the VxWorks manual complain that some older PPCs have sub-standard support for multitasking? If it were emulatable, why wouldn't they just write software mutexes -- why wouldn't they just explain that it might be a little slower?

If you are implying that you don't remember the details, then why bring it up as the centerpiece of your "counter-example" to my claim of the obvious? I'd be curious (though only mildly so) as to what kind of toy example you actually made work in an acceptable way with such a mechanism.

But that's not true either. Another mechanism used by the algorithm is the idea of shared variables. This requires atomic-by-type hardware implementations. For example, suppose that on a multiprocessor system a 32-bit (or 64-bit) word-sized datum was actually updated on a shared bus one 8-bit byte at a time, in sequence. Then you can't declare the shared variables required to implement the "taken numbers", unless you appeal to some other mechanism to lock the bus. You will notice that "shared" is not a recognizable concept in ANSI C.

Implementing correct mutexes has a *LOT* of requirements of the implementation. None of the typical "attempted" critical-section algorithms that you are supposed to learn in an OS CS class as examples of stuff that may or may not work satisfy *all* the requirements. Even the all-hallowed Peterson algorithm requires an implementation of shared variables, and it only solves the case for two tasks.
No matter what, multithreading support boils down to using some kind of hardware support, and there's none to be found in the ANSI C standard, implicit or otherwise.

No ... it would make me someone who has actually visited a bakery with a "take a number" service system and participated in it. The algorithm described is exactly analogous to that (that's why it's called that), so reading and understanding it is almost instantaneous. But all the same questions you have while waiting in line to place your order for rolls apply: 1) What happens if two people try to grab the same number at the same time? (shared variables) 2) What happens when they run out of numbers? (overflow/wrap-around) 3) What prevents someone from grabbing a number on behalf of someone else? (analogous to: how do you rigorously identify a process to a unique access to one number at a time?)

30 seconds -- it's too much time to spend on this triviality, especially considering it's an inadequate solution. If I had to spend an hour waiting for service at a bakery, I would have taken my business elsewhere.

Do you also contend that the moon landing was faked ... for now? It's not like there's any open or controversial issue left with this dead horse. You only have to listen and understand what I am saying. Look up *ANY* description of the Lamport Bakery Algorithm -- my exposition of its flaws is inescapable and clearly there. And a transformation of it is not going to save it.

Which is only a good side effect of it -- the real reason for putting it into the standard library is functionality, and for portability reasons you *HAVE* to put them in there if you want to use such functionality universally. I also insist that 1 + 1 == 2.

There we go, just as predicted ... "something like ..." When did I say that? You have put forth an example that you claim counters my oh-so-questionable claim that you can't implement mutexes portably.
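For reference, here is a minimal sketch of the bakery scheme the thread is arguing about (the function names are mine, not from any standard API). The comments mark exactly the assumptions the post objects to -- atomic word-sized shared reads/writes, a fixed process count, and self-identifying process ids -- none of which ANSI C guarantees:

```c
#include <assert.h>

#define NPROC 4

/* Shared state: the algorithm assumes these int loads and stores are atomic,
 * which ANSI C does not promise (a bus could update them byte-by-byte). */
static volatile int choosing[NPROC];
static volatile int number[NPROC];

static void bakery_lock(int i)            /* i: self-identifying process id */
{
    int j, max = 0;

    choosing[i] = 1;
    for (j = 0; j < NPROC; j++)           /* iterate over ALL processes */
        if (number[j] > max) max = number[j];
    number[i] = max + 1;                  /* "take a number" -- can wrap */
    choosing[i] = 0;

    for (j = 0; j < NPROC; j++) {
        while (choosing[j]) ;             /* busy-wait on shared memory */
        while (number[j] != 0 &&
               (number[j] < number[i] ||
                (number[j] == number[i] && j < i))) ;
    }
}

static void bakery_unlock(int i)
{
    number[i] = 0;                        /* give the number back */
}
```

Exercised from a single thread the waits never spin, so this compiles and runs anywhere; whether it actually excludes concurrent tasks depends entirely on the unstated hardware assumptions above.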
I spent the requisite 30 seconds of analysis to bust it, and you are dancing around the issue like you have ants in your pants. The onus was on you the second you offered a counter-example to my statement.

You mean it's something you are just unaware of. I offer for you to seek any authority or any resource to counter such a claim. And just citing yet another algorithm doesn't count -- I'm not here to do random research to debunk your misconceptions.

Because you offered your argument before I argued a fully fleshed-out proposal. I have already admitted that I am not an expert on parsable grammars, but at least I am aware that parsable grammars exist. You said that my proposal was all but impossible based on this alone. If I propose a grammar that has a mistake in it, that doesn't mean my general idea is flawed; it's just a reflection of the fact that I am not an expert in this one narrow field. So far I have refrained from doing that. Contrast and compare that with your suggesting the Lamport Bakery Algorithm as a proposal for a portable mutex. Besides the fact that it's stupid and I was able to kill it instantly, it doesn't *prove* anything either way, which I think was probably your intention all along.

It's obvious to someone who understands multitasking. Spending hours trying to learn a "take a number" based service system is perhaps indicative of something otherwise.

There we go ... "or something similar". Right down the expected path. So when I bust some modification that you are inevitably going to make, will that "prove" something to you? Or will you simply offer amendment after amendment, knowing that my 30-second kills don't prove anything?

Well, then why have I spent half a dozen long-winded posts trying to explain the completely obvious, without one iota of success? The entire onus is on you, yet with every post I try to make it clearer and clearer what is so ridiculously obvious.
Why don't you try taking all arguments seriously, then decide whether they are actually serious or not *after* you have looked at them?

You mean you want me (an admitted non-expert, and therefore intentionally sidestepping it) to come up with a parsable grammar as proof that there exists a parsable grammar on arbitrary symbols? Well, if that's what it takes ...

You are forgoing legitimate platforms. See, screwing over VAX and old CDCs for choosing 1s complement -- that's one thing. They are both obsolete, and *everyone* understands that that was just a design mistake. But multiple stacks? It's not obvious in any way that multiple stacks are necessarily incorrect. You see, many pure RISCs such as Alpha, PPC, and MIPS have nothing in their architecture that fits a unified stack (not true of x86, AMD64, IA64, or Sparc). This is especially true of embedded systems, as I mentioned, which can have non-uniform data stores. So if you want to screw over architectures by proposing amendments to the standard, doesn't it make sense to limit the problems to the minimum possible number of architectures, and ideally to architectural ideas that won't be moving forward? The purpose of a generic programming language is not to dictate what algorithms can or can't be implemented in it. I'm merely proposing that having multiple stacks may be the best solution for certain kinds of problems, so a proposal which models the system with a unified stack will mischaracterize an otherwise acceptable and valuable computational platform that *isn't* so hampered with the way the specification exists today.

Well, I'm waiting ... You don't understand. The compiler has to try to solve the halting problem just to *provide* the information.

Either you know even less than I do about parsable grammars or you just have no comprehension of what I've just said. You start with a number of symbols (+, -, <, >, =, ... etc.) and you wish to create a non-ambiguously parsable grammar on them.
This is not in the realm of "open problems"; this is in the realm of "homework exercise".

Well ... I *do* have a reference. It's called the C++ standard. As far as *I* can tell, there is no outcry in that community against operator overloading -- STL would lose a lot of its potency without it.

You can't imagine just the littlest bit of frustration on my part? I've spent a half dozen posts trying to explain a very simple point, and you are just not getting it at all.

How many parameters does this take? What is a tree, and what about this function name tells me that these are nodes of a tree? List? A global list? A list supplied as a parameter? Again, how many parameters does this function take? Do you mean prime polynomial? Prime over a specific ring? This is a standard library function. Why didn't you also include a standard operator such as "+" in this list? # has special meaning to the preprocessor and probably should not be included; @| would be better. Does this use coroutine, message-passing, self-blocking, or mutex semantics? And again, how many parameters does such a function take? ^ means exclusive-or in the C language. % has the meaning of modulo.

Well, the fact is that you didn't choose the operators very carefully, and even reciting the function names won't tell you their semantics/limitations or even how many parameters they take. At least with operators you know the number of parameters no matter what. You see, with operators it's possible, for example, to partition them into unary versus binary. So using regex notation (ignoring the need to \ escape +, *, ? and ^ inside of a [...]), imagine that we start with a grammar such as:

Associative binary operators:
[@$%^<>]*[@$%][|^+*&]
[@$%^<>]*[^<>][|^]

Not (necessarily) associative binary operators:
[@$%^<>]*[@$%^][-/<>]
[@$%^<>]*[^<>][/]

Unary operators:
[@$%^<>]*[@$][~!]

Three inputs (like the (a ? b : c) operator):
[@$%^<>|+*&]*[%^<>|+*&][?]+
[@$%^<>|+*&]*[:]+

Two inputs, two outputs (for the carry, or high-word multiplies):
[@$%^<>|+*&]*[%^<>|+*&][:]+
[@$%^<>|+*&]*[?]+

So in a sense you would know something about them without looking at their actual definition. So one could omit the "like" or "before/after" clauses from my original definition and say that the operator inherits the order of precedence from the operator it is most closely derived from (the rightmost character, excluding : or ?). The trick in describing the two-output operator scheme is that the second output parameter would be treated as if it had a C++ & ref operator on it. Example:

int _Operator @: unsigned int low *? (int a, int b) {
    long long unsigned x = ((long long) a) * ((long long) b);
    low = (unsigned int) x;
    return (unsigned int) (x >> 32);
}

so this "output parameter" could not have a "const" on it. And the syntax for usage would be:

high @: low = a *? b;

with the idea of the "=" being there to mark a clear division between what's written to and what's read from. And the degenerate form:

high @:= a *? b;

follows a familiar C-like pattern. Of course, I was trying to be conservative, but I may have missed a parsing ambiguity in there somewhere. I also almost certainly have missed entire patterns of symbols, and I think it would just be a matter of adding them in there. Assuming I made no errors, or assuming that they are easily fixed, does it still look impossible or even all that challenging to add these to the parser, or even the lexer (as someone else mentioned)?

Ok -- I'll give you that one. In my own primitives I use this:

#define ADD_GEN(dst, s1, cf) { \
    dst += s1;                 \
    cf = s1 > dst;             \
}

(where dst and s1 are some unsigned integral type) which I always assumed only worked because of 2s complement. In any event, I am unaware of any C compiler that can reduce this to one instruction by the "as-if" rule.
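The ADD_GEN idiom is easy to check. In fact, for unsigned operands the comparison trick is guaranteed by C's modulo-2^N unsigned arithmetic regardless of signed representation: after `dst += s1`, a wrap happened exactly when the sum came out smaller than an addend. A small sketch (the `add64` helper chaining two words is mine):

```c
#include <stdint.h>

/* The carry-capturing add as quoted, with the operands parenthesized. */
#define ADD_GEN(dst, s1, cf) { \
    (dst) += (s1);             \
    (cf) = (s1) > (dst);       \
}

/* Chain two 32-bit words: add into the low word, fold the carry into the high. */
static void add64(uint32_t *hi, uint32_t *lo, uint32_t addend)
{
    uint32_t cf;
    ADD_GEN(*lo, addend, cf);
    *hi += cf;
}
```

Whether any compiler collapses this to a single add-with-carry instruction is a separate question; the semantics, at least, are portable for unsigned types.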
And of course carries also come out of shifts, and a simple comparison operation isn't going to help there.

But this isn't an *algorithm*; it's a tautology.

But you can add them with no trouble ([URL]http://bstring.sf.net/[/URL], [URL]http://www.and.org/vstr/[/URL], [URL]http://www.annexia.org/freeware/c2lib/[/URL]). In particular, there were no real extensions I needed to the language to make "the better string library" work well. I could have used an expand() heap function as an alternative to realloc(), but I found a mostly reasonable asymptotic alternative solution. And besides, a lot of people are under the impression that you can use C's built-in strings adequately for real strings, and for trivial cases, or with ridiculous investments into programming around their limitations, you can. But the point is that it also doesn't provide a way to implement them that is practical or useful from a performance point of view. Writing such libraries will give solutions that are either ridiculously slow (using public-school algorithms only) or extremely convoluted and non-portable. Thus there is a real difference between bignums and strings.

But how is it forced if every hardware platform supports it? Because the standard has nothing to offer me? I've gone round and round on the potential for 1s complement with other people in this or the comp.std.c newsgroup in the past. My conclusion is that it should be treated like the 8088 and 68k were treated by the C standard on the issue of including floating point -- i.e., where the functionality is compelling enough and enough platforms support it, it is acceptable to push it in there anyway and force the weaker platforms to emulate it. In general, there is no forward-looking reason to pay deference to 1s complement.
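On the shift case: no comparison after the fact recovers the lost bit, but the bit that "carries out" of an unsigned left shift can be read before shifting. A minimal sketch (the helper name is mine):

```c
#include <stdint.h>

/* Left-shift a 32-bit word by one, returning the bit shifted out
 * (the "carry") -- it must be read before the shift destroys it. */
static uint32_t shl1_carry(uint32_t *x)
{
    uint32_t cf = *x >> 31;   /* top bit is what falls off */
    *x <<= 1;
    return cf;
}
```

This still takes two operations where the hardware's shift instruction produces the carry for free, which is the post's underlying complaint.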
I'm really interested in seeing you make contributions of a technical nature here too -- I mean besides the occasional function that is obviously wrong, or specifications that include fundamental logic errors in them.

I already gave the formula:

((a << 32) + b) * ((c << 32) + d)

is the same as:

((a*c) << 64) + ((a*d + b*c) << 32) + b*d

So long as we are capable of a widening multiply and a carry-capturing adder, it's not hard to organize these:

result[0] = low(b*d)
result[1] = high(b*d) + low(a*d) + low(b*c)
result[2] = carry(high(b*d) + low(a*d) + low(b*c)) + high(a*d) + high(b*c) + low(a*c)
result[3] = carry(carry(high(b*d) + low(a*d) + low(b*c)) + high(a*d) + high(b*c) + low(a*c)) + high(a*c)

So you can see that all it takes is the ability to capture carries out of additions and both the high and low parts of a widening multiply. This transforms two-entry-long "bignums" into a 4-entry-long result for multiplication. The more general answer is just more of the same. [/QUOTE]
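The result[] recipe in the quoted post can be checked mechanically. A sketch with 32-bit "digits" (the function and helper names are mine; the 64-bit intermediates stand in for the widening multiply, and taking the high half of a running sum stands in for the captured carries):

```c
#include <stdint.h>

static uint32_t lo32(uint64_t x) { return (uint32_t) x; }
static uint32_t hi32(uint64_t x) { return (uint32_t)(x >> 32); }

/* Multiply two 64-bit values, each split into 32-bit digits a:b and c:d,
 * into a 4-digit (128-bit) result, following the quoted layout. */
static void mul64to128(uint64_t x, uint64_t y, uint32_t r[4])
{
    uint32_t a = hi32(x), b = lo32(x), c = hi32(y), d = lo32(y);
    uint64_t bd = (uint64_t) b * d, ad = (uint64_t) a * d;   /* widening multiplies */
    uint64_t bc = (uint64_t) b * c, ac = (uint64_t) a * c;

    r[0] = lo32(bd);                                   /* low(b*d) */
    uint64_t s1 = (uint64_t) hi32(bd) + lo32(ad) + lo32(bc);
    r[1] = lo32(s1);                                   /* high(b*d)+low(a*d)+low(b*c) */
    uint64_t s2 = (uint64_t) hi32(s1)                  /* carry of the line above */
                + hi32(ad) + hi32(bc) + lo32(ac);
    r[2] = lo32(s2);
    r[3] = hi32(s2) + hi32(ac);                        /* final carry + high(a*c) */
}
```

One sanity check: (2^64 - 1)^2 = 2^128 - 2^65 + 1, so squaring all-ones should give digits {1, 0, 0xFFFFFFFE, 0xFFFFFFFF} from low to high.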