Comparison of C Sharp and C performance


spinoza1111

I've re-quoted the part where I explained this.


No.  In fact, memory management of any sort would cost time.

The problem was that in Spinny's code, each of the recursive calls
used to calculate factorial(19) would also perform an iterative calculation
of factorial(N) for N <= 19.  So instead of calculating factorial(19)
iteratively and being done with iteration, it also calculates factorial(18)
iteratively, factorial(17) iteratively, factorial(16) iteratively, and so
on.

This has been explained to you several times. The purpose was to
execute so many cycles that it would be impossible to use an
interpreter to execute it: yet C Sharp executes the code only ten
percent slower than the C code because it's not (as many people here,
including you, claim) an interpreter.
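
To make the duplicated work concrete, here is a stripped-down sketch of the
pattern being described (not the exact code, which appears later in the
thread; the g_mults counter is invented here purely for illustration):

/* Sketch of the pattern described above: every recursive level redoes
   a full iterative factorial.  The g_mults counter is not part of the
   original program; it is added only to show how much extra work the
   nesting causes. */
#include <stdio.h>

static long g_mults;  /* counts iterative multiplications performed */

static long long fact_both(long long n)
{
    long long iter = 1;
    long long k;
    if (n <= 2) return n;
    for (k = n; k > 1; k--) {      /* iterative recalculation at EVERY level */
        iter *= k;
        g_mults++;
    }
    return n * fact_both(n - 1);   /* plus the recursive descent */
}

int main(void)
{
    fact_both(19);
    /* one iterative factorial(19) needs 18 multiplications; the nested
       version above performs 170 of them for a single top-level call */
    printf("iterative multiplications: %ld\n", g_mults);
    return 0;
}

Multiply that by the one million top-level calls in the benchmark and the
stated aim of burning cycles is clear, whether or not one agrees with the
method.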
 

spinoza1111

Some do, some don't, and it depends on the processor too.  Depending
on when that edition was written, it might be that all Java VMs still
only had bytecode interpreters.

I don't know whether this is true of Java, but it's untrue of
Microsoft, and my example shows the result: code with many cycles
(that recalculates a factorial iteratively in each recursive call for
each factorial in order to be a CPU hog) runs only ten percent slower
on C Sharp .Net.
 

spinoza1111

spinoza1111  said:
A little [formal] learning isn't a dangerous thing: it goes a long
way, in fact, and part of the problem is that people like Scripto Boy
think it's useless. I can certainly see why the guy with a small
number of academic chops wanted them mentioned, because he didn't want
to get shit on. But Herb has a large number of academic chops, so I
don't see your point.
You are right, in that this disdain for academic qualifications is part
and parcel of the dogma of this (and other) newsgroup(s).
I speak from experience, in 2 senses:
   1) Because there is *some* merit to it.
   2) Because I've often been there myself.
Again, I point this out to indicate that I really do understand where
the regs are coming from, having been there myself.  But, again
echoing St. Paul, we must grow up at some point.

I do wish you well in your search for maturity.

"Maturity" in the corporation means a dismal acceptance of eternal
subordination.
 

spinoza1111

Which misses my point.  I was correcting someone else's
misunderstanding about the differences between two pieces of C code.
I was not commenting on why you wrote what you did.  Not everything in
a thread you start is about you.
No, it isn't. But I should be paid for the vigorous debates I start.
 

spinoza1111

...
A little [formal] learning isn't a dangerous thing: it goes a long
way, in fact, and part of the problem is that people like Scripto Boy
think it's useless. I can certainly see why the guy with a small
number of academic chops wanted them mentioned, because he didn't want
to get shit on. But Herb has a large number of academic chops, so I
don't see your point.

You are right, in that this disdain for academic qualifications is part
and parcel of the dogma of this (and other) newsgroup(s).

I speak from experience, in 2 senses:
    1) Because there is *some* merit to it.
    2) Because I've often been there myself.

Again, I point this out to indicate that I really do understand where
the regs are coming from, having been there myself.  But, again
echoing St. Paul, we must grow up at some point.

Well, I think I speak for a number of hard-working programmers who
have returned to school at night only to have people without their
work ethic and knowledge use back-stabbing and office politics to get
ahead. Seebach here is simulating office politics because he started a
rumor about Schildt, and here he doesn't have the courage to debate me
directly but prefers, like the office politician, to make snide
remarks to third parties.
 

spinoza1111

http://gcc.gnu.org/onlinedocs/gcc-4.4.2/gcc/G_002b_002b-and-GCC.html#...

The language-independent component of GCC includes the majority of the
optimizers, as well as the “back ends” that generate machine code for
various processors.

The part of a compiler that is specific to a particular language is
called the “front end”. In addition to the front ends that are
integrated components of GCC, there are several other front ends that
are maintained separately. These support languages such as Pascal,
Mercury, and COBOL. To use these, they must be built together with GCC
proper.

Most of the compilers for languages other than C have their own names.
The C++ compiler is G++, the Ada compiler is GNAT, and so on. When we
talk about compiling one of those languages, we might refer to that
compiler by its own name, or as GCC. Either is correct.

Historically, compilers for many languages, including C++ and Fortran,
have been implemented as “preprocessors” which emit another high level
language such as C. None of the compilers included in GCC are
implemented this way; they all generate machine code directly. This
sort of preprocessor should not be confused with the C preprocessor,
which is an integral feature of the C, C++, Objective-C and Objective-C++
languages.

Caution: the above material after the link and before THIS point is a
QUOTE. It looks like something I might write because it's literate.
 

Seebs

Do that. I dare say that the seismic tremors rippling out from
the raising of your voice will wash across the pond and leave
Seebach awash with dismay and shame. Then again, perhaps not.

Well, it's entirely possible that, especially if his book has done better,
they'd react in some way. I'm not sure what. But hey, if it hurts me, THEN
we can actually talk to a lawyer about defamation cases, depending on what
he says. In theory, truth is a defense in defamation cases, but I don't
think I am especially worried about pointless speculation like "what will
happen if Spinny says something true".

-s
 

jw

I've re-quoted the part where I explained this.


No.  In fact, memory management of any sort would cost time.

The problem was that in Spinny's code, each of the recursive calls
used to calculate factorial(19) would also perform an iterative calculation
of factorial(N) for N <= 19.  So instead of calculating factorial(19)
iteratively and being done with iteration, it also calculates factorial(18)
iteratively, factorial(17) iteratively, factorial(16) iteratively, and so
on.

-s

Understood. Thanks Ben and Seebs!
 

Nick Keighley

On Jan 4, 5:51 pm, Nick Keighley <[email protected]>


That would, if true, mean that each time a statement was executed it
would have to be scanned and parsed, and I don't believe that's what
Little C does. Therefore, it's a small compiler.

I must admit I'm of the "it's a grey area" school of thought. A
compiler produces code that is executed on a machine. An interpreter
directly executes the source code. Once various virtual machines are
added to the mix the line becomes less distinct. Pascal p-code
machines and older Java (pre-JIT) translators look like compilers.
Sinclair Spectrum BASIC was interpreted. Someone in this thread drew
the distinction that it counts as compilation if a significant
transformation is performed on the source code. Perhaps the question
is whether the transformation can be easily reversed (de-compilation
from gcc output would be hard). "Structure and Interpretation of
Computer Programs" has some interesting thoughts on the subject.

Is Schildt's C compiler/interpreter easily available? Is it available
online or would I have to try and get hold of the book?
 

spinoza1111

Well, it's entirely possible that, especially if his book has done better,
they'd react in some way.  I'm not sure what.  But hey, if it hurts me, THEN
we can actually talk to a lawyer about defamation cases, depending on what
he says.  In theory, truth is a defense in defamation cases, but I don't
think I am especially worried about pointless speculation like "what will
happen if Spinny says something true".

Besides truth, there's the question of who initiated the name-calling.
I asked you respectfully to consider withdrawing "C: The Complete
Nonsense": in response you called me a "moron" whereas I stuck to the
facts about what you did.

However, I will hold off contacting our publisher until I see the
revision of "C The Complete Nonsense" you have promised. When will
that be available?
 

spinoza1111

Do that.  I dare say that the seismic tremors rippling out from
the raising of your voice will wash across the pond and leave
Seebach awash with dismay and shame.  Then again, perhaps not.

I am not trying to "leave Seebach awash". Instead, I am trying to get
him to withdraw his case against Schildt. I was successful in getting
the Wikipedia article corrected, and I'm going to succeed here.
 

Nick

Nick Keighley said:
I must admit I'm of the "it's a grey area" school of thought. A
compiler produces code that is executed on a machine. An interpreter
directly executes the source code. Once various virtual machines are
added to the mix the line becomes less distinct. Pascal p-code
machines and older Java (pre-JIT) translators look like compilers.
Sinclair Spectrum BASIC was interpreted. Someone in this thread drew
the distinction that it counts as compilation if a significant
transformation is performed on the source code. Perhaps the question
is whether the transformation can be easily reversed (de-compilation
from gcc output would be hard). "Structure and Interpretation of
Computer Programs" has some interesting thoughts on the subject.

The Sinclair Spectrum is a nice example. As a line was typed it was
transformed: keywords were stored as single bytes (the top half of the
"character" set) - the fact that they'd been typed in as special
key-presses helped, of course - and numeric constants were followed by
a marker byte (meaning "don't display the next 5 bytes") and then the
number converted into binary. So what was interpreted was a lot more
"binary-like" than the visible BASIC.

I'd not call that compilation though!
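
A rough C sketch of that storage scheme; the token value, the marker byte
and the number format below are invented for the example rather than the
Spectrum's real ones, so treat it as an illustration of the idea only:

/* Illustrative sketch of the line storage Nick describes: a keyword
   becomes a single token byte, and a numeric constant is followed by a
   marker byte plus a binary copy of its value that the lister skips.
   Values and formats here are made up for the example. */
#include <stdio.h>
#include <string.h>

enum { TOK_PRINT = 0xF5, NUM_MARKER = 0x0E };   /* example values only */

/* Store the line "PRINT 42" as:
   [TOK_PRINT] '4' '2' [NUM_MARKER] <binary form of 42> */
static size_t store_print_42(unsigned char *buf)
{
    size_t i = 0;
    double value = 42.0;        /* binary form kept alongside the digits */

    buf[i++] = TOK_PRINT;       /* one byte instead of the word PRINT */
    buf[i++] = '4';             /* digits kept so the line can be listed */
    buf[i++] = '2';
    buf[i++] = NUM_MARKER;      /* "don't display the next bytes" */
    memcpy(buf + i, &value, sizeof value);
    return i + sizeof value;
}

int main(void)
{
    unsigned char line[32];
    size_t n = store_print_42(line);
    printf("stored %zu bytes for the line PRINT 42\n", n);
    return 0;
}

The interpreter then dispatches on the token byte and reads the number
straight from its binary form, which is why the stored program is so much
more "binary-like" than the listing.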
 

Processor-Dev1l

A C and a C Sharp program were written to calculate the 64-bit value of
19 factorial one million times, using both the iterative and recursive
methods to solve (and compare) the results.
Here is the C code.
#include <stdio.h>
#include <time.h>
long long factorial(long long N)
{
    long long nFactorialRecursive;
    long long nFactorialIterative;
    long long Nwork;
    if (N <= 2) return N;
    for ( nFactorialIterative = 1, Nwork = N;
          Nwork > 1;
          Nwork-- )
        nFactorialIterative *= Nwork;
    nFactorialRecursive = N * factorial(N-1);
    if (nFactorialRecursive != nFactorialIterative)
       printf("%I64d! is %I64d recursively but %I64d iteratively wtf!
\n",
              N,
              nFactorialIterative,
              nFactorialRecursive);
    return nFactorialRecursive;
}

int main(void)
{
    long long N;
    long long Nfactorial;
    double dif;
    long long i;
    long long K;
    time_t start;
    time_t end;
    N = 19;
    K = 1000000;
    time (&start);
    for (i = 0; i < K; i++)
        Nfactorial = factorial(N);
    time (&end);
    dif = difftime (end,start);
    printf("%I64d! is %I64d: %.2f seconds to calculate %I64d times
\n",
           N, Nfactorial, dif, K);
    return 0; // Gee is that right?
}

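An aside for anyone trying the C code above outside Microsoft's compiler:
%I64d is an MSVC-specific conversion. With a C99 toolchain the same kind of
output can be written portably, for example (the values here are filled in
by hand only to keep the snippet self-contained):

/* Portable variant of the final printf above, assuming a C99 library:
   <inttypes.h> supplies the PRId64 macro for printing 64-bit integers,
   avoiding the MSVC-only %I64d. */
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int64_t N = 19;
    int64_t Nfactorial = 121645100408832000LL;  /* 19! */
    int64_t K = 1000000;
    double dif = 1.0;                           /* stand-in timing value */

    printf("%" PRId64 "! is %" PRId64 ": %.2f seconds to calculate %" PRId64 " times\n",
           N, Nfactorial, dif, K);
    return 0;
}
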
Here is the C Sharp code.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace N_factorial
{
    class Program
    {
        static void Main(string[] args)
        {
            long N;
            long Nfactorial = 0;
            TimeSpan dif;
            long i;
            long K;
            DateTime start;
            DateTime end;
            N = 19;
            K = 1000000;
            start = DateTime.Now;
            for (i = 0; i < K; i++)
                Nfactorial = factorial(N);
            end = DateTime.Now;
            dif = end - start;
            Console.WriteLine
                ("The factorial of " +
                 N.ToString() + " is " +
                 Nfactorial.ToString() + ": " +
                 dif.ToString() + " " +
                 "seconds to calculate " +
                 K.ToString() + " times");
            return;
        }
        static long factorial(long N)
        {
            long nFactorialRecursive;
            long nFactorialIterative;
            long Nwork;
            if (N <= 2) return N;
            for ( nFactorialIterative = 1, Nwork = N;
                  Nwork > 1;
                  Nwork-- )
                nFactorialIterative *= Nwork;
            nFactorialRecursive = N * factorial(N-1);
            if (nFactorialRecursive != nFactorialIterative)
                Console.WriteLine
                ("The iterative factorial of " +
                 N.ToString() + " " +
                 "is " +
                 nFactorialIterative.ToString() + " " +
                 "but its recursive factorial is " +
                 nFactorialRecursive.ToString());
            return nFactorialRecursive;
        }
    }
}

The C Sharp code takes about 110% of the running time of the C code
(that is, it is ten percent slower), which may seem to "prove" the
half-literate Urban Legend that "C is more efficient than C Sharp or
VM/bytecode languages in general, d'oh".
As I take pains to point out in my book, "Build Your Own .Net Language
and Compiler" (Apress 2004) (buy it now buy it now), it's not even
grammatical to say that one programming language is more "efficient"
than another.
But far more significantly: the ten percent "overhead" would be
several orders of magnitude larger were C Sharp the "inefficient,
interpreted language" which many C programmers claim it is. That is
because a true interpreter parses and/or unpacks each instruction every
time it is executed, and both of the above examples execute their
instructions millions of times.

Were C Sharp interpreted, the above C Sharp code would run very,
very slowly, but C Sharp isn't interpreted.

Instead, a one-time modification is made to the byte code upon loading
to thread the codes together. This explains part of the ten percent
"overhead". For the remainder of execution, a sort of switch statement
is operating in which the code for each individual byte code uses goto
to transfer control. This means that C and C Sharp execute at the same
effective rate of speed, and the ONLY efficiency-based reason for
choosing C is avoiding the initial overhead of setting up the .Net
virtual machine.

But what does this virtual machine provide? Almost 100 percent safety
against memory leaks and many other bad things.
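
Purely to illustrate the switch-dispatched byte-code loop described above -
the opcodes and stack machine below are invented for the example, and this
is not how the CLR actually runs IL (it JIT-compiles it to native code):

/* Minimal switch-based dispatch loop of the kind described above.
   Everything here (opcodes, stack machine, program) is invented for
   illustration; it only shows why dispatching pre-decoded codes is
   cheap compared with re-parsing source text on every execution. */
#include <stdio.h>

enum { OP_PUSH, OP_MUL, OP_PRINT, OP_HALT };

static void run(const int *code)
{
    long stack[16];
    int sp = 0;     /* next free stack slot */
    int pc = 0;     /* index of the next code to execute */

    for (;;) {
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++];          break;
        case OP_MUL:   sp--; stack[sp - 1] *= stack[sp];  break;
        case OP_PRINT: printf("%ld\n", stack[sp - 1]);    break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* computes and prints 4 * 5 * 6 = 120 */
    int program[] = { OP_PUSH, 4, OP_PUSH, 5, OP_MUL,
                      OP_PUSH, 6, OP_MUL, OP_PRINT, OP_HALT };
    run(program);
    return 0;
}
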
Indeed, C is like the (unwritten) British constitution. In that
arrangement, Parliament cannot "entrench" an act that would bind all
subsequent Parliaments, because Parliamentary supremacy (like the
putative power of C) must at all costs be preserved: this was the
innovation of 1688/9, when Parliament hired King William and his
Better Half as Kingie and Queenie on condition that they be nice and
obey Parliament. This means that in fact the British constitution
contains no protection against a runaway, tyrannical, "long"
Parliament. It promised not to do so in 1911 and confirmed that it
would be nice in 1949, but there is nothing in the British
constitution to prevent Parliament from enacting a new bill, as long
as it could get enough Peers in the House of Lords to wake up and
permit it to do so (Lords' approval being required, unlike for money bills),
and HM the Queen to give Royal Assent.
When Kurt Godel was studying the booklet given him in Princeton to
pass the US Citizenship test, he claimed to find a bug that would
allow America to be a dictatorship. I think he'd be even more
terrified of the British constitution, for like his self-reflexive
paradoxical statement in his incompleteness/inconsistency result, the
very power of Parliament renders it impotent to write a Constitution!
Whereas .Net and Java provide "Constitutional" safeguards against code
doing nasty things even as the American constitution was intended to
be, and to some practical extent is, "a machine that runs of itself".
Both constitutions can fail, but the British constitution is more
likely to. It enabled Margaret Thatcher to rule by decree and override
even her own Cabinet, and ramrod through a medieval "poll tax" in 1990
that produced civil disturbances. Britons enjoy human rights mostly
through the EU. Whereas misuse of the American constitution during
Bush's administration was more vigorously resisted especially in its
courts, where the judges are truly independent.
It is true that a massive "bug" in the American constitution developed
in 1860 with the outbreak of civil war, but this was extra-
Constitutional. It resulted from a deliberate misinterpretation of
state's rights under the Tenth Amendment in which the states retained
a "nullifying" level of sovereignity, but their assent to the
Constitution in 1789 had itself nullified this strong interpretation
of "state's rights".
Since 1689, no such "bug" has occured in the British constitution.
However, the British constitution existed before 1689, and its bug was
just as serious, for it produced the English civil war. This was
because there is no provision in the British constitution for a pig-
headed king, and King Charles II could conceivably in the future
refuse Royal Assent to needed legislation, or use the British Army
(which is NOT under the control of Parliament, but of the Monarch to
whom officers swear fealty) against his own people.
C Sharp programs can fail as can the American Constitution. But the
idiotic equation of the reliability of C and C Sharp in fact resembles
the political passivity of Britons who talk darkly of the EU being a
"new world order" destroying their "rights as Englishmen" when in fact
it's the best thing that ever happened to them. And, I've not
addressed how the rights of Irishmen have been abused under the
British constitution.
I'm for one tired of the Urban Legends of the lower middle class,
whether in programming or politics.

OT, but *much* more interesting than your rambles about "C: The
Complete Nonsense", etc.
What about the memory footprint of C vs C#? Sure, .Net code is
compiled rather than interpreted so it doesn't have a huge time
penalty - but it is compiled to an intermediate language designed to
run in a *huge* virtual machine. Bloatware which is only slightly
slower than nonbloated code is still bloated and still slower. Also, a
much more interesting question (than simple factorial calculations) is
how something like Microsoft Excel would work if its code base was
rewritten from C to C#. I suspect that the cumulative slowdowns would
result in a spreadsheet which was annoyingly slower.

Out of curiosity, what rules of grammar does the sentence "C is more
efficient than C#." violate?

Well, that is not the point... C# code is compiled into the intermediate
language, but when the "bytecode" is run, it is compiled into native
machine code, which runs much like the output of a common C compiler.
C# can never replace C, that is logical, but it is still a very
powerful language, combining both HLL features (classes, libraries,
etc.) and LLL features (pointers, COM, etc. - what Microsoft calls
unsafe code). While C# can never be faster or better optimized than C,
it is still better than many other HLL languages, which either
recompile the intermediate code on every pass or don't support low
level things like pointers.
 

Dennis (Icarus)

Well, that is not the point... C# code is compiled into the intermediate
language, but when the "bytecode" is run, it is compiled into native
machine code, which runs much like the output of a common C compiler.
C# can never replace C,

You can have the C# compiler generate native code as well.

<snip>

Dennis
 

Ben Bacarisse

Walter Banks said:
Schildt published several variations of Little C; most are online.

http://tripatlas.com/Herbert_Schildt

Links to the code from that page are now dead.

The zip file includes one file for all the code in chapter 29. This
file would have to be edited and probably split into the correct parts.
It is not a simple matter to find the components and to get them to
build. I stopped trying after a few minutes but no doubt someone
could manage it.
 

Walter Banks

Ben said:
Links to the code from that page are now dead.

I checked the links earlier

The links to the C and C++ books' code were live,
but the links to the original Dr. Dobb's article were dead.
The zip file includes one file for all the code in chapter 29. This
file would have to be edited and probably split into the correct parts.
It is not a simple matter to find the components and to get them to
build. I stopped trying after a few minutes but no doubt someone
could manage it.

Quote from the first link I posted.

To use this code, the individual source files will have to be
extracted from the listings for chapter 29: parser.c, littlec.c,
and libc.c. These files need to be compiled into a single
executable (there are no header files). Some of the code
is actually demo input for the C interpreter, not code that
belongs to the interpreter itself.

Sample makefile:


CC=gcc
CFLAGS= -Wall -g -c
OBJ=libc.o parser.o littlec.o
all: littlec
littlec: $(OBJ)
	$(CC) -o littlec $(OBJ)
libc.o:
	$(CC) $(CFLAGS) libc.c
parser.o:
	$(CC) $(CFLAGS) parser.c
littlec.o:
	$(CC) $(CFLAGS) littlec.c
clean:
	rm -f littlec $(OBJ)

w..
 

Ben Bacarisse

Walter Banks said:
I checked the links earlier

The links to the C and C++ books' code were live,
but the links to the original Dr. Dobb's article were dead.

As far as I can see the page you pointed to does not link to the Dr.
Dobb's article. The article is still online at:
http://www.ddj.com/184408184 but the listing would have to be cut and
pasted or otherwise scraped from the page.

The links to the code from the article are dead. That is all I meant.
Is there a working link from that page to the Little C code that
avoids the problems with the code from the book chapter?
Quote from the first link I posted.
<snip quote from the page>

I don't want to sound rude, but I /can/ read! The book code for
chapter 29 is one large file with no obvious way to find the files
parser.c, littlec.c and libc.c. The file has lines marking "listing
1" up to "listing 25". Some can be easily identified, but others are
not obvious. Some functions from Little C appear in more than one
listing so presumably parts of Little C are developed as the chapter
progresses. Am I missing something obvious about how to pull the code
from the listing file?

I think it could be done by using the listing from the journal article
to find the best match from the chapter listing, but I stopped when I
decided that might be the best way to go.
 

Walter Banks

Ben said:
As far as I can see the page you pointed to does not link to the Dr.
Dobb's article. The article is still online at:
http://www.ddj.com/184408184 but the listing would have to be cut and
pasted or otherwise scraped from the page.

The links to the code from the article are dead. That is all I meant.
Is there a working link from that page to the Little C code that
avoids the problems with the code from the book chapter?

<snip quote from the page>

I don't want to sound rude, but I /can/ read! The book code for
chapter 29 is one large file with no obvious way to find the files
parser.c, littlec.c and libc.c. The file has lines marking "listing
1" up to "listing 25". Some can be easily identified, but others are
not obvious. Some functions from Little C appear in more than one
listing so presumably parts of Little C are developed as the chapter
progresses. Am I missing something obvious about how to pull the code
from the listing file?

listing 3 parser.c
listing 9 littlec.c
listing 18 lclib.h
listing 19 lclib.c
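
For anyone wanting to attempt the split programmatically: assuming the
chapter 29 file marks each section with a line containing "listing"
followed by a number (the exact marker text and its case in the real file
may differ), a quick-and-dirty splitter might look like this:

/* Rough splitter for the chapter 29 listing file: starts a new output
   file (listing01.txt, listing02.txt, ...) whenever a line containing
   "listing" followed by a number is seen.  The marker text is an
   assumption about the file's format; adapt as needed. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    char line[1024], name[64];
    FILE *in, *out = NULL;
    int n;

    if (argc != 2) {
        fprintf(stderr, "usage: %s chapter29-file\n", argv[0]);
        return EXIT_FAILURE;
    }
    if ((in = fopen(argv[1], "r")) == NULL) {
        perror(argv[1]);
        return EXIT_FAILURE;
    }
    while (fgets(line, sizeof line, in) != NULL) {
        char *p = strstr(line, "listing");
        if (p != NULL && sscanf(p, "listing %d", &n) == 1) {
            if (out != NULL) fclose(out);
            sprintf(name, "listing%02d.txt", n);
            if ((out = fopen(name, "w")) == NULL) {
                perror(name);
                return EXIT_FAILURE;
            }
            continue;           /* don't copy the marker line itself */
        }
        if (out != NULL) fputs(line, out);
    }
    if (out != NULL) fclose(out);
    fclose(in);
    return 0;
}

The pieces would then still need to be renamed and stitched into parser.c,
littlec.c and the library file by hand, using the mapping above.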
 
