Comparison of C Sharp and C performance

spinoza1111

OSS is hardly slave labour.  Many people in that scene get paid for
their work.  It's just a community effort.  Of course if this is how
you play ostrich then so be it.


Citation needed.

OO was developed because of insufficient code reuse. In my own book, I
show how each distinct piece of the puzzle was developed as something
that could be independently tested and reused. An Amazon
reviewer (A Customer at
http://www.amazon.com/Build-Your-NE...ageNumber=2&sortBy=bySubmissionDateDescending)
writes:

"This book contains a huge amount of code that has obviously been
under development and evolving for a long time. The code has a level
of documentation, error checking and self-consistency testing that is
rare to see even in commercial code, much less sample code for a
book."

You see, OO (which C does NOT have) allowed me to craft the compiler
in a few months in 2003, since I used basic software (such as software
to parse words and do arithmetic in nondecimal bases) which I'd
written in the Nineties. I didn't have to worry about this code
breaking something remote since it existed in static libraries.

On this foundation I was able to build encapsulated stateful objects
such as the scanner, the representation of a data type, and the
representation of a variable at run time. Each one of these is
deliverable as a testable, stand-alone unit.

The kicker is that I worked in Visual Basic (.Net). Apart from its
lack of modern syntactic sugar, its clunky syntax is in fact block-
structured, with the blocks being expressed in a very different way
from C. The key was the structure imposed by the .Net object oriented
environment.
So first it's that there are no solutions in C to the problem.  Now
there are too many?

It is paradoxical but true: if one is truly worried about
software reliability (which few programmers are in fact) then a large
set of solutions, any one of which can with a great deal of
probability be buggy owing to the aliasing and "powerful" nature of C,
constitutes a null set UNTIL you have done your homework and
determined the subset of "solutions" that work. We see this all the
time here, with constant wars over which C solution is "best" or even
works.

For example, we're not supposed to expect a basic malloc() to work
recursively, and we're not supposed to use sprintf(). The large number
of tools is actually the problem.
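
The sprintf() complaint, concretely: sprintf() cannot be told how
large its destination buffer is, while C99's snprintf() can. A
minimal sketch of the difference (buffer size and values are
illustrative only):

#include <stdio.h>

int main(void)
{
    char buf[16];

    /* sprintf(buf, "value = %d", 123456789) would overrun buf,
       since the formatted text needs 17 characters plus the
       terminator.  snprintf() truncates instead, and returns the
       length the full output would have had. */
    int needed = snprintf(buf, sizeof buf, "value = %d", 123456789);
    if (needed >= (int) sizeof buf)
        printf("truncated (needed %d bytes)\n", needed + 1);
    else
        printf("%s\n", buf);
    return 0;
}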

Every time I read the following words of the midcentury sociologist
and musician Theodor Adorno, I am astonished by his prescience as
regards computing:

"Their intellectual energy is thereby amplified in many dimensions to
the utmost by the mechanism of control. The collective stupidity of
research technicians is not simply the absence or regression of
intellectual capacities, but an overgrowth of the capacity of thought
itself, which eats away at the latter with its own energy. The
masochistic malice [Bosheit] of young intellectuals derives from the
malevolence [Bösartigkeit] of their illness."
I don't get what your rant is.  By your logic we shouldn't trust the
code you produced since we didn't write it.

Please don't, since it was produced to illustrate things off the cuff.
Also, if I import version X of an OSS library then NOBODY is changing
it on me...


Um, the stack of the threads is where you typically put cheap per-
thread data.  Otherwise you allocate it off the heap.  In the case of
the *_r() GNU libc functions, they store any transient data in the
structure you pass them.  That's how they achieve thread safety.
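
A concrete instance of that pattern is POSIX strtok_r(): the caller
owns the save pointer, so there is no hidden static state for threads
to collide over. A minimal sketch, assuming a POSIX system:

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[] = "one two three";
    char *save;           /* the caller-supplied "transient data" */
    char *tok;

    for (tok = strtok_r(line, " ", &save);
         tok != NULL;
         tok = strtok_r(NULL, " ", &save))
        puts(tok);
    return 0;
}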

It's a clumsy and old-fashioned method, not universally used. It also
has bugola potential.

You see, the called routine is telling the caller to supply him with
"scratch paper". This technique is an old dodge. It was a requirement
in IBM 360 BAL (Basic Assembler Language) that the caller provide the
callee with a place to save the 16 "general purpose registers" of the
machine.

The problem then and now is what happens if the caller is called
recursively by the callee as it might be in exception handling, and
the callee uses the same structure. It's not supposed to but it can
happen.
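
A sketch of that failure mode, reusing one strtok_r() state block for
two interleaved scans (a contrived illustration, not code from any
real library):

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <string.h>

int main(void)
{
    char a[] = "alpha beta gamma";
    char b[] = "one two";
    char *save;                            /* ONE state block, reused */
    char *tok = strtok_r(a, " ", &save);   /* outer scan starts */

    printf("outer: %s\n", tok);

    /* reentrant reuse of the same state block clobbers it */
    for (tok = strtok_r(b, " ", &save); tok; tok = strtok_r(NULL, " ", &save))
        printf("inner: %s\n", tok);

    /* the outer scan's position is gone */
    tok = strtok_r(NULL, " ", &save);
    printf("outer again: %s\n", tok ? tok : "(lost)");
    return 0;
}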

In OO, the called "procedure" is just a separate object instance with
completely separate work areas. The caller need do nothing special,
and this means that things are less likely to break under maintenance.
Whereas your "solution" violates a central tenet of good software: the
caller should NOT have to do anything special based on the internal
needs of the callee.

Your solution partitions library functions that need scratch memory
from pure functions. Such apartheid always creates problems, for
example where the library developers discover that a library routine
needs (or does not need) state.
It's like in OOP where you have a fresh instance of a class per
thread.  The class has public/private data members that are unique to
the instance.  OMG C++ HAS NO THREAD SAFETY!!!


malloc() is thread safe in GNU libc.  It can fail/succeed in multiple
threads simultaneously.  What's your point?

That's nice to know. But not all mallocs are created equal, are they?
I didn't know that (nor do I presume to know it; having been the
victim of a joe-job myself, I don't trust anything most people say
w.r.t. identities unless they prove it through other means).  That being
said, why not reserve your posts for more positive or constructive
contributions instead?

Each thread I start begins with a positive contribution. Then the regs
**** with me and I don't take it lying down.
 
spinoza1111

I don't care about the "regs,"  in this thread I only care about what
YOU are trying to pass off as knowledge.


Defend yourself by being right then.  If you're trying to make
arguments actually make sure they're sound and well reasoned instead
of just shotgunning stupidity and seeing what sticks.


Which is ironic given how much of your day to day life is probably the
result of C programs...

ROTFLMAO

Oh great C how have I offended thee?
You power the ATM which won't give me money!
I abase me before thee oh C thou God
You help make the Body Butter I put on my bod.

Give. Me. A. Break. The daily life of ordinary people is on balance
MESSED UP by computer systems, especially computer systems programmed
by thugs who refuse to learn modern languages designed for
reliability. Starting with the financial models that assembled the
"securitized mortgages" so carelessly as to cause a massive financial
crash.

I really, really hope that the next time I fly an Airbus its avionics
are in Ada...not C, and it's likely they are, since Europeans feel no
special loyalty towards C.
So far it seems to be you saying C sucks, and nobody caring.  I don't
care if you program in C or not.  I'm only replying here because you
posted some nonsense comparison and are trying to pass it off as
science.  You're a fraud.


So why bother with the comparison?  If you knew the algorithm in your
C program was not comparable, why bother?

That'd be like comparing bubble sort in C# to qsort in C ...

My point was that the two programs took roughly the same TIME to do
the same THING, but the C program had a performance issue. It did so
because, like the C Sharp program, it used the most direct method.

In C, if you don't have the time to reuse code, you do it yourself,
hopefully based on a model in Knuth (or Schildt). The simplest and
most obvious way to hash is to probe the adjacent cells (open
addressing), not to chain through a linked list.
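
A minimal sketch of hashing into "the adjacent cells" -- linear
probing -- as opposed to chaining; the table size, sentinel, and load
factor here are illustrative assumptions:

#include <stdio.h>
#include <stdlib.h>

#define TABLE_SIZE 1024
#define EMPTY      (-1)

static int table[TABLE_SIZE];

static void insert(int val)
{
    unsigned slot = (unsigned) val % TABLE_SIZE;

    while (table[slot] != EMPTY)          /* probe the next cell */
        slot = (slot + 1) % TABLE_SIZE;
    table[slot] = val;
}

int main(void)
{
    unsigned i;

    for (i = 0; i < TABLE_SIZE; i++)
        table[i] = EMPTY;
    for (i = 0; i < TABLE_SIZE / 2; i++)  /* stay half full */
        insert(rand());                   /* rand() >= 0, never EMPTY */
    printf("%u values inserted by linear probing\n", TABLE_SIZE / 2);
    return 0;
}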

In C Sharp the simplest way to hash is to select the ONE library that
is shipped with .Net.

That silly "1984" Apple ad, with the Father talking about the "one
right way" and the cute runner smashing the glass, has overinfluenced
minds here. It's silly (and Theodore Roszak made this point at the
same time as the Apple ad in a small essay called "From Satori to
Silicon Valley") to believe that high technology can be a matter of
"freedom"; it seemed to Roszak that it's about control, and
exacerbating social inequality. C programmers bought into the illusion
that they could be "free" and "code the way they wanted" and the
result was a forensic mess: one is constantly learning that this or
that function (such as sprintf()) can't be used by grave "libertarian"
lawn trolls whose abstract commitment to freedom is in fact belied by
the necessarily thuggish way in which they have to ensure that what
they THINK is right, is followed.

That is: if you kill the Father you still have Big Brother.
Well the Windows kernel has a C interface for the syscalls.  So at
some point something has to call that.  So chances are good that the
C# runtime is based on top of the C runtime.

Your point being? My experience inside IBM and what I know of
Microsoft is that development groups who do this sort of work use
proprietary code generators which might generate C but which force
programmers to code a certain way.
 
Tom St Denis

OO was developed because of insufficient code reuse. In my own book, I
show how each distinct piece of the puzzle was developed as something
that could be independently tested and reused. An Amazon reviewer (A
Customer at http://www.amazon.com/Build-Your-NET-Language-Compiler/product-review...)
writes:

Amazon reviews are bullshit most of the time. I have published two
books of my own [which I won't cite], and for one of them I had
people carrying over from Usenet [when I was being joe-jobbed]
protesting the book. So what? Review comments are meaningless.

Code reuse is NOT a sole feature of OO programming languages.
You see, OO (which C does NOT have) allowed me to craft the compiler
in a few months in 2003, since I used basic software (such as software
to parse words and do arithmetic in nondecimal bases) which I'd
written in the Nineties. I didn't have to worry about this code
breaking something remote since it existed in static libraries.

If I were to write a quick parser [beyond trivial complexity] today
I'd use flex and yacc... tools written years ago to do such a job... I
don't have to re-invent the wheel.
On this foundation I was able to build encapsulated stateful objects
such as the scanner, the representation of a data type, and the
representation of a variable at run time. Each one of these is
deliverable as testable, stand alone.

Ya, and?
The kicker is that I worked in Visual Basic (.Net). Apart from its
lack of modern syntactic sugar, its clunky syntax is in fact block-
structured, with the blocks being expressed in a very different way
from C. The key was the structure imposed by the .Net object oriented
environment.

I'm happy for you. How does any of this have any bearing whatsoever
on you misrepresenting computer science throughout this thread?
It is rather paradoxical but true, but if one's truly worried about
software reliability (which few programmers are in fact) then a large
set of solutions, any one of which can with a great deal of
probability be buggy owing to the aliasing and "powerful" nature of C,
constitutes a null set UNTIL you have done your homework and
determined the subset of "solutions" that work. We see this all the
time here, with constant wars over which C solution is "best" or even
works.

And there aren't competing libraries/classes in Java and C#? This is
more naive nonsense.
For example, we're not supposed to expect a basic malloc() to work
recursively, and we're not supposed to use sprintf(). The large number
of tools is actually the problem.

How does malloc() work recursively? Like what does that even mean?
Can you show some C code that recursively calls malloc()?
It's a clumsy and old-fashioned method, not universally used. It also
has bugola potential.

The alternative is that you pass things globally. And I don't get how

void myclass::member(...) { this->integer = 5; }

is any different from, say,

void myclass_member(struct self *this, ...) { this->integer = 5; }

in terms of data storage. If you write

myclass mc;
mc.member();

or

struct self mc;
myclass_member(&mc);

they're both essentially on the stack, and you're passing the data
either way.
In OO, the called "procedure" is just a separate object instance with
completely separate work areas. The caller need do nothing special,
and this means that things are less likely to break under maintenance.
Whereas your "solution" violates a central tenet of good software: the
caller should NOT have to do anything special based on the internal
needs of the callee.

Um, if your function needs per-call transient data, the caller has to
maintain it or you can NEVER be thread safe. Suppose I have a class
with a function doWork(void); how do I tell doWork() what data to work
on? It'll be a member of the class, right? And who maintains the
class? The caller.

You're writing nonsense again...

Generally speaking if a function needs scratch memory that is not used
between calls the function itself will maintain it, not the caller.
The structure you pass to the GNU libc *_r() functions is not scratch
memory though. It's your damn data management structures...
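
The distinction shows up in POSIX's paired interfaces: ctime() returns
a pointer into static scratch storage that the function itself
maintains (so concurrent callers trample each other), while ctime_r()
writes into a buffer the caller owns. A minimal sketch, POSIX assumed:

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);
    char buf[26];                      /* ctime_r() needs >= 26 bytes */

    printf("%s", ctime(&now));         /* libc's shared static buffer */
    printf("%s", ctime_r(&now, buf));  /* caller-owned storage */
    return 0;
}
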
That's nice to know. But not all mallocs are created equal, are they?

malloc() has a very prescribed behaviour.  For platforms which support
threading [which is not part of C99], the threading standard requires
the libc functions, malloc() included, to be thread-safe.
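
What that guarantee means in practice, as a minimal sketch (POSIX
threads assumed; the thread count and allocation size are arbitrary):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static void *worker(void *arg)
{
    int i;
    (void) arg;
    for (i = 0; i < 100000; i++)
        free(malloc(64));    /* concurrent malloc/free must be safe */
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    int i;

    for (i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    puts("done");
    return 0;
}
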
Each thread I start begins with a positive contribution. Then the regs
**** with me and I don't take it lying down.

This thread [and all posts therein] contain nothing but your asinine
lies, fabrications, and other manufactured "facts." I'm not fucking
with you, I'm just pointing out that you've been repeatedly dead
wrong. Try posting things that are true and people will stop
correcting you.

Tom
 
Tom St Denis

My point was that the two programs took roughly the same TIME to do
the same THING, but the C program had a performance issue. It did so
because, like the C Sharp program, it used the most direct method.

But not even.  You could have used a hash library, one which even
comes with GNU libc, to do the same work.  You chose not to.  That's
nobody's fault but yours.  And even then you used the least efficient
configuration possible.  You even admitted as much, too.  So your "test"
proved nothing except that you don't know how to run a test.
In C, if you don't have the time to reuse code, you do it yourself,
hopefully based on a model in Knuth (or Schildt). The simplest and
most obvious way to hash is to use the adjacent cells, not the linked
list.

Yes, but if I knew I had 1000 entries to hash I wouldn't have used a
hash with only 1000 buckets. I would have used one with say 2000 or
more (so you're not 100% full). And I sincerely, with every fibre of
my being doubt that Knuth endorses a "wing it" approach to software
development. And even if he did, he is NOT the authority on software
engineering. He's a computer scientist for sure, far beyond my own
standards, but he isn't [as far as I know] much of a software
developer in terms of writing maintainable code, documentation, etc,
etc.
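
(The arithmetic behind that: with chaining, a successful lookup walks
about 1 + alpha/2 links on average, where alpha = entries/buckets.
1000 entries in 1000 buckets gives alpha = 1.0, roughly 1.5 links per
lookup; 2000 buckets halves alpha and trims that to roughly 1.25.)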

And regardless, this has nothing to do with C itself. You could
ascribe that development model to ANY language. So keep your
criticism specific to C itself.
In C Sharp the simplest way to hash is to select the ONE library that
is shipped with .Net.

What if the supplied hash library isn't suitable for what you need to
hash? What if a tree is more suitable? What if it's a specific form
of RB-tree or ... I sincerely doubt C# comes standard with every form
of data management technique known to man.

Can I conclude from that then that C# sucks because it doesn't have
splay tree functionality as part of the standard library?

Tom
 
Seebs

And I sincerely, with every fibre of
my being doubt that Knuth endorses a "wing it" approach to software
development. And even if he did, he is NOT the authority on software
engineering. He's a computer scientist for sure, far beyond my own
standards, but he isn't [as far as I know] much of a software
developer in terms of writing maintainable code, documentation, etc,
etc.

Actually, he sort of is -- in the sense that he developed an entirely
new model of programming in order to write software that he could
maintain with very, very few bugs. If you get a chance, pick up and
read his
book on literate programming; I'm not 100% sold on it, but you can
hardly argue with the maintainability and reliability of TeX.
Can I conclude from that then that C# sucks because it doesn't have
splay tree functionality as part of the standard library?

No, you can only conclude that Spinny is not really arguing from
anything but emotion.

-s
 
Keith Thompson

Tom St Denis said:
On Dec 30, 11:39 am, spinoza1111 <[email protected]> wrote:
[more of the same]
[...]
This thread [and all posts therein] contain nothing but your asinine
lies, fabrications, and other manufactured "facts."

Tom, "spinoza1111" has been told this N times, for some very large
value of N. What makes you think that his response to the N+1th
attempt will be any different?

You are wasting your time (and ours) by attempting to reason with him.
 
Seebs

OSS is hardly slave labour. Many people in that scene get paid for
their work. It's just a community effort. Of course if this is how
you play ostrich then so be it.

In the interests of full disclosure: $DAYJOB is the Linux side of Wind
River Systems. I get paid for writing open source software, and we do
indeed distribute most of what I write under open source licenses. Not
because of linking with other open source software, but because management
believes that it's good citizenship to contribute back to the community
as well as using free code.

We contribute back to projects, from bug reports to patches. One of
my coworkers has put a ton of work into implementing debugger
support in the Linux kernel, and more importantly (to the community)
into making that support clean enough for upstream inclusion.
I didn't know that (nor do I presume to know it; having been the
victim of a joe-job myself, I don't trust anything most people say
w.r.t. identities unless they prove it through other means). That being
said, why not reserve your posts for more positive or constructive
contributions instead?

Because he can't make any, I think?

-s
 
Keith Thompson

Tom St Denis said:
On Dec 30, 11:39 am, spinoza1111 <[email protected]> wrote:
[more of the same]
[...]
This thread [and all posts therein] contain nothing but your asinine
lies, fabrications, and other manufactured "facts."

Tom, "spinoza1111" has been told this N times, for some very large
value of N. What makes you think that his response to the N+1th
attempt will be any different?

You are wasting your time (and ours) by attempting to reason with him.

So why are you reading this thread?

Hmm. I don't have a good answer to that.
 
Kenny McCormack

[email protected] (Richard Harter) said:
[more of the same]
[...]
This thread [and all posts therein] contain nothing but your asinine
lies, fabrications, and other manufactured "facts."

Tom, "spinoza1111" has been told this N times, for some very large
value of N. What makes you think that his response to the N+1th
attempt will be any different?

You are wasting your time (and ours) by attempting to reason with him.

So why are you reading this thread?

Hmm. I don't have a good answer to that.

Looks like another leaky killfile.
 
Seebs

Hmm. I don't have a good answer to that.

I have something of one:

The fact is, you can learn a lot more sometimes from understanding why
something is wrong than you would from understanding why it is right.

I've seen dozens of implementations, recursive and iterative alike, of
the factorial function. Spinny's was the first O(N^2) one I've ever seen,
and it actually highlights something that may be non-obvious to the
reader about interactions between algorithms.
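
The original code is not reproduced here, but one hypothetical way a
factorial loop goes quadratic is restarting the product from scratch
for each value (an illustration and an assumption, not Spinny's actual
code):

#include <stdio.h>

int main(void)
{
    unsigned long long fact;
    unsigned n, k;
    const unsigned N = 20;        /* 20! still fits in 64 bits */

    for (n = 1; n <= N; n++) {    /* O(N^2) multiplies overall... */
        fact = 1;
        for (k = 2; k <= n; k++)
            fact *= k;
        printf("%u! = %llu\n", n, fact);
    }                             /* ...versus O(N) if each step
                                     reused the previous product */
    return 0;
}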

Watching people who do understand something explain why someone who
doesn't is wrong can be very enlightening. We recently had someone in
comp.lang.ruby who was advocating that the language be changed to round
outputs "near zero" to zero exactly so that sin(180) would produce the
correct exact value of zero rather than some other value. In fact, this
turns out to be an extremely bad idea -- but the discussion of *why* it's
bad taught me more about floating point.

-s
 
spinoza1111

He's not talking about the technique used in BAL etc.  The
transient data is contained within a structure that is passed by
the caller to the callee.  The space for the structure is on the
stack.  Recursion is permitted.

If control "comes back" to the caller who has stacked the struct and
the caller recalls the routine in question with the same struct, this
will break.

But the main point is that the callee isn't a black box if I have to
supply it with storage.
 
spinoza1111

And I sincerely, with every fibre of
my being doubt that Knuth endorses a "wing it" approach to software
development.  And even if he did, he is NOT the authority on software
engineering.  He's a computer scientist for sure, far beyond my own
standards, but he isn't [as far as I know] much of a software
developer in terms of writing maintainable code, documentation, etc,
etc.

Actually, he sort of is -- in the sense that he developed an entirely
new model of programming in order to write software that he could
maintain with very, very few bugs.  If you get a chance, pick up and
read his
book on literate programming; I'm not 100% sold on it, but you can
hardly argue with the maintainability and reliability of TeX.

Here, Tom engages in a strange offset or shift I've seen in the real
world. This is to defend bad practice by denying that scientific
leaders have any credibility in a slightly offset, slightly redefined
"real world" in which the "best" practitioners are not "scientists":
yet strangely deserve the respect ordinarily given scientists.

The unacknowledged fact: Knuth is seen by the ordinary techie as being
the "champ", or "contendah" he could have been, with long-term
economic security guaranteed not by the impoverishing "free market"
but by tenure. This causes the ordinary techie to slightly redefine
software quality to make himself feel better about a world in which
it's just true that he's been made a perpetual candidate for his own
post and a perpetual infant.

The literate code of TeX is then re-read as somewhat pretentious and a
waste of resources.

It is true that Knuth's literate programming reduces bugs. He observed
that they still occur but diminish exponentially when the programmer
is free to develop his own approaches as needed, including, in Knuth's
case, the writing of essays about code while the code is being
written, something that Dijkstra did as well.

This is not done in industry. First of all, we need only look at
the extraordinarily low level of literacy at postings here to realize
that most "senior" people have lost the ability to be coherent, so
much so that provably accurate grammar is to them a marker of mental
disorder. Secondly, essays are a "waste of time" in the infantile
corporate playbook.
No, you can only conclude that Spinny is not really arguing from
anything but emotion.

I am almost as disgusted with .Net as I am with C, since computer
systems in general are designed to make the rich, richer. I don't
love .Net, and emotion has nothing to do with my preferences. The only
time my emotions are engaged is when you maliciously attack people.
 
spinoza1111

Tom St Denis said:
On Dec 30, 11:39 am,spinoza1111<[email protected]> wrote:
[more of the same]
[...]
This thread [and all posts therein] contain nothing but your asinine
lies, fabrications, and other manufactured "facts."
Tom, "spinoza1111" has been told this N times, for some very large
value of N.  What makes you think that his response to the N+1th
attempt will be any different?
You are wasting your time (and ours) by attempting to reason with him.

So why are you reading this thread?

Because like "respectable" white Americans, photographed staring at
the strangled and burned black body at a Lynching, they are attracted
to the firelit circle, the screams, the blood and glass.
 
spinoza1111

I have something of one:

The fact is, you can learn a lot more sometimes from understanding why
something is wrong than you would from understanding why it is right.

I've seen dozens of implementations, recursive and iterative alike, of
the factorial function.  Spinny's was the first O(N^2) one I've ever seen,

Again, if you'd studied computer science in the university
environment, in which practitioners are not being infantilized by
being made perpetual candidates, you'd realize that order(n^2) is not
an evaluative or condemnatory term. It was pointed out to you that the
purpose of the code was to execute instructions repeatedly, and
discover whether C Sharp had an interpretive overhead. It was pointed
out to you that had it such an overhead, the time taken by the C Sharp
program would itself have been order(n^2) with respect to the C time.

It was pointed out to you that the reason for starting with an
order(n^2) algorithm was to factor out random effects in a too-fast
program, to let things run.

In university classes in computer science that you did not attend, a
qualified professor would have shown you that not only is the language
of computational complexity not evaluative, but also that we have to
write such algorithms to learn more.

You are making the same sort of ignorance-based attack you made years
ago on Schildt when you said "the 'heap' is a DOS term".
 
spinoza1111

Amazing, isn't it. Tom comes rolling in here pronouncing his ill-
thought-out views, gets his arse handed to him on a plate pretty much,
starts huffing and puffing about the bleeding obvious (he has to be a
Dilbert character - something like "Obvious Tom" - you know, with
meeting highlights such as "Let's not lose our focus on quality" or
"This solution needs to be Engineered"). He's another one of those
MS-hating blowhards who's been allowed to blow his own self-importance
up a little high.

Indeed. Tom believes "with every fibre of his being" that we must
write Quality software. But I guess I have fibres which demur. Bad
fibres.
 
spinoza1111

But not even.  You could have used a hash library, one which even
comes with GNU libc, to do the same work.  You chose not to.  That's
nobody's fault but yours.  And even then you used the least efficient
configuration possible.  You even admitted as much, too.  So your "test"
proved nothing except that you don't know how to run a test.


Yes, but if I knew I had 1000 entries to hash I wouldn't have used a
hash with only 1000 buckets.  I would have used one with say 2000 or
more (so you're not 100% full).

Since I saw your mistake
I can this assertion make
It is a mistake I'd never make.

When I make a mistake
I go and eat a steak
And thought I take
How to hide my own mistake.
 
And I sincerely, with every fibre of
my being doubt that Knuth endorses a "wing it" approach to software
development.  And even if he did, he is NOT the authority on software
engineering.  He's a computer scientist for sure, far beyond my own
standards, but he isn't [as far as I know] much of a software
developer in terms of writing maintainable code, documentation, etc,
etc.

And regardless, this has nothing to do with C itself.  You could
ascribe that development model to ANY language.  So keep your
criticism specific to C itself.
In C Sharp the simplest way to hash is to select the ONE library that
is shipped with .Net.

What if the supplied hash library isn't suitable for what you need to
hash?  What if a tree is more suitable?  What if it's a specific form
of RB-tree or ... I sincerely doubt C# comes standard with every form
of data management technique known to man.

What part of "inheritance" don't you understand? Oh never mind...
 
Moi

Again, if you'd studied computer science in the university environment,
in which practitioners are not being infantilized by being made
perpetual candidates, you'd realize that order(n^2) is not an evaluative
or condemnatory term. It was pointed out to you that the purpose of the
code was to execute instructions repeatedly, and discover whether C
Sharp had an interpretive overhead. It was pointed out to you that had
it such an overhead, the time taken by the C Sharp program would itself
have been order(n^2) with respect to the C time.

It was pointed out to you that the reason for starting with an
order(n^2) algorithm was to factor out random effects in a too-fast
program, to let things run.

In university classes in computer science that you did not attend, a
qualified professor would have shown you that not only is the language
of computational complexity not evaluative, but also that we have to
write such algorithms to learn more.

The reason for the sub-optimal hashtable is now clear to me.

BTW, I fixed your hashtable for you. It is even shorter, and runs faster.
There is still a lot of room for improvement. Feel free to improve it.

*****/

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#define ARRAY_SIZE 1000
#define SESSIONS 100000

struct meuk {
        int head;
        int next;
        int val;
        } nilgesmeuk[ARRAY_SIZE];

#define COUNTOF(a) (sizeof(a)/sizeof(a)[0])

int main(void)
{
    int val , *p;
    time_t start , end;
    double dif;
    unsigned ii,slot,sessions,hops;
    time (&start);
    hops = 0;
    for (sessions = 0; sessions < SESSIONS; sessions++)
    {
        for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++) nilgesmeuk[ii].head = -1;

        for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++)
        {
            nilgesmeuk[ii].val = val = rand();
            nilgesmeuk[ii].next = -1;
            slot = val % COUNTOF(nilgesmeuk);
            for( p = &nilgesmeuk[slot].head ; *p >= 0 ; p = &nilgesmeuk[*p].next ) {hops++;}
            *p = ii;
        }
    }
    time (&end);
    dif = difftime (end,start);
    printf("It took C %.2f seconds to hash %u numbers with %u hops (chain length = 1+%6.4f), %u times\n"
           , dif, ii, hops, (float) hops/(ii*sessions), sessions);
    return 0;
}

/*****

AvK
 
spinoza1111

Again, if you'd studied computer science in the university environment,
in which practitioners are not being infantilized by being made
perpetual candidates, you'd realize that order(n^2) is not an evaluative
or condemnatory term. It was pointed out to you that the purpose of the
code was to execute instructions repeatedly, and discover whether C
Sharp had an interpretive overhead. It was pointed out to you that had
it such an overhead, the time taken by the C Sharp program would itself
have been order(n^2) with respect to the C time.
It was pointed out to you that the reason for starting with an
order(n^2) algorithm was to factor out random effects in a too-fast
program, to let things run.
In university classes in computer science that you did not attend, a
qualified professor would have shown you that not only is the language
of computational complexity not evaluative, but also that we have to
write such algorithms to learn more.

The reason for the sub-optimal hashtable is now clear to me.

BTW, I fixed your hashtable for you. It is even shorter, and runs faster.
There is still a lot of room for improvement. Feel free to improve it.

*****/

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#define ARRAY_SIZE 1000
#define SESSIONS 100000

struct meuk {
        int head;
        int next;
        int val;
        } nilgesmeuk[ARRAY_SIZE];

#define COUNTOF(a) (sizeof(a)/sizeof(a)[0])

int main(void)
{
    int val , *p;
    time_t start , end;
    double dif;
    unsigned ii,slot,sessions,hops;
    time (&start);
    hops = 0;
    for (sessions = 0; sessions < SESSIONS; sessions++)
    {
        for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++) nilgesmeuk[ii].head = -1;

        for (ii = 0; ii < COUNTOF(nilgesmeuk); ii++)
        {
            nilgesmeuk[ii].val = val = rand();
            nilgesmeuk[ii].next = -1;
            slot =  val % COUNTOF(nilgesmeuk);
            for( p =  &nilgesmeuk[slot].head ; *p >= 0 ; p = &nilgesmeuk[*p].next ) {hops++;}
            *p = ii;
        }
    }
    time (&end);
    dif = difftime (end,start);
    printf("It took C %.2f seconds to hash %d numbers with %d hops(chainlength=1+%6.4f) , %d times\n"
           , dif, (int) ii, hops, (float) hops/(ii*sessions), sessions);
    return 0;

}

/*****

AvK

I'll get back to you when I have more time to study this, but...

I don't know why you calculate what seems to be an invariant in a for
loop.

I don't know why it's apparently necessary to calculate the size of
the array.
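
(On the second question, for what it's worth: COUNTOF is plain sizeof
arithmetic, which the compiler folds to a constant for a fixed-size
array, so nothing is calculated per iteration. A minimal
demonstration:)

#include <stdio.h>

#define COUNTOF(a) (sizeof(a)/sizeof((a)[0]))

static int arr[1000];

int main(void)
{
    /* sizeof on an array with static extent is a compile-time
       constant; COUNTOF(arr) is simply the constant 1000. */
    printf("%zu\n", COUNTOF(arr));
    return 0;
}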
 
