[concerning memory leaks...]
I never had problems with memory leaks in C++ (not like with C).
Interesting. I can't remember ever having had any problems with
memory leaks in C.
Ok, I am going to take this one up just one more time.
Beyond that, sorry, I couldn't care less.
I do remember one in C++, recently, due to a compiler bug
that resulted in the compiler not calling a destructor.
I bow down to you, oh, holy ghost.
(But it wasn't in code I'd written.
Sure. Understood. How COULD it be, considering how great
you are?
Had the code
been slightly cleaner, it wouldn't have triggered the compiler
error. But VC++ has problems with getting the destructor calls
right if you return in the middle of a loop.)
There are problems up the wazoo in any more or less complex
app, at least from what I know, not from what you know.
But most of the
leaks I've seen have been in Java.
WHAT?
Is it some kind of insult?
Are you going to fall THAT low?
Not because of the language,
but because of people trying to get the code out the door too
quickly, making the classic error of skimping on the design,
because "that's not what we're delivering".
It does not matter in Java, more or less, within a reasonable
set of limitations.
In Java memory leaks ARE possible. But THEORETICALLY.
Not practically. If you understood how gc works, you would
NEVER make such a statement.
Could you give me an example of the situation that will cause
a memory leak in Java? I'd be curious to see that one.
And I can give you an example of memory leaks in C++
that will crack your skull trying to avoid them,
especially in an async environment.
The secret, of course, is good up front design.
Blah, blah, blah.
That is NOT how the real world works.
Do you think you get to spend half a year on your great "design"
in real life?
Nope. You have functionality requirements, in most situations
I had to deal with. The "design" is driven by the sales and
marketing departments.
"We want this and this and that".
And we want it NOW.
Actually, we want it YESTERDAY.
You have about 2 weeks to do a job that should normally
take a competent programmer at least 2 months.
And THAT is how it works, in Silicon Valley at least.
I have NEVER EVER seen a SINGLE company that does
"good design". I am not even sure they know what it means
to begin with.
Because it is so expensive that most of them cannot even
BEGIN thinking in those terms.
It's true that
it requires more work to avoid all leaks in C than it does in
C++, and (slightly) more work in C++ than in Java,
WHAT?
Is this some kind of insult?
SLIGHTLY more than in Java?
I would definitely like to see some substantiation of this
argument, even though I already know the end of the story.
but if your
organization doesn't do good up front design
And WHICH organization does that kind of thing?
You mean Intel?
- nah.
You mean HP?
- nah
You mean SGI?
- nah
You mean Fujitsu?
- nah.
You mean Amdahl, for god's sake, the competitor of no one
less than IBM?
- nah.
WHO then?
Well, no one I know of. Sorry to tell you.
"Good design" is the kind of crap they brainwash you wit
at computer school.
It does not exist, unless you do your own project at home
and couldn't care less how much it is going to "cost" you
or how much time it is going to take.
But, putting aside the fun part of it,
it is actually a FUNDAMENTAL issue.
What is "good design" by definition?
Well, it is something that does not exist!
Why?
Well, because how do you create something?
Well, initially you have some idea to create a product that
does this and that.
At THAT stage, you only have a pinhole-sized view of ALL sorts
of issues that may or may not arise as you start implementing
some of this stuff and start seeing situations you did
not even expect to have when you had your initial idea.
Yes, no question about it. If you are "smart" and have enough
experience and are not simply hacking away at it, you DO
know how to walk the minefield and how to avoid most of the
pitfalls. But that is the ART part of programming. Not science,
necessarily.
If you do appreciate beauty and appreciate music,
as that is about the most abstract and most fundamental aspect
of comprehending what structures are, then yes, you will
be able to design and code your stuff much better than if you
don't.
But...
In the end, what is FOREVER happening is that you never know
what is going to happen tomorrow. You may just be called into
a salesman's office and he will tell you something that will
make your hair stand on end.
You may never know exactly how you want some of
your GUI panels to look, what kind of program parameters
you want to expose to the user, or what kind of logging system
you are going to implement.
Are you going to waste some heavy duty time writing some
"general purpose, omnipotent logging system" in ANY project?
Why?
How much is it going to cost you?
How portable is it going to be if in one environment you might
have log4j and in another environment there is no such thing?
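At most you can afford a thin facade up front, so the rest of
the code does not care which logging backend a given environment
has. A minimal sketch, assuming only the JDK's own
java.util.logging as the fallback; the AppLog and JulLog names
are made up for illustration:

    import java.util.logging.Level;
    import java.util.logging.Logger;

    // Hypothetical facade: application code only ever sees AppLog,
    // so swapping log4j in or out is a one-class change.
    interface AppLog {
        void info(String msg);
        void error(String msg, Throwable t);
    }

    // Fallback backed by java.util.logging, which ships with every JDK.
    class JulLog implements AppLog {
        private final Logger log;

        JulLog(String name) {
            this.log = Logger.getLogger(name);
        }

        public void info(String msg) {
            log.info(msg);
        }

        public void error(String msg, Throwable t) {
            log.log(Level.SEVERE, msg, t);
        }
    }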
And what about robustness?
Do you know all the exceptions in your future project?
Oh, you mean you are going to take a two year sabbatical
and lay back on some beach on one of the Virgin Islands
and do your great design there?
Do you even begin to comprehend what good design is worth?
Well, first of all, it has to be done by the systems architect
level guys, just to make sure.
How much do you pay those guys?
Well, at LEAST $150/hr. and that is a CHEAP one.
Can you multiply some numbers?
Well, I can tell you without a calculator: for any more or less
complex program that is even worth mentioning, you are talking
high-end 5 or 6 figure numbers.
Simple as that.
And that is for a SINGLE person.
And on and on and on.
(and design and code reviews,
Screw those. About the sickest idea I ever had to deal with.
WHO is going to "review" YOUR code?
They are up to the hilt with THEIR code.
WHAT kind of thing are they going to do with YOUR code?
Well, they are going to find some trick or some totally
meaningless lil piece of crap, anything they can find, just to
discredit you and make you feel guilty. So everyone on
a "code review panel" can see how "great" THEY are,
not you.
Do you understand?
If you call me into a code review to see someone else's code,
how much of a chance do I have to see some of the most
subtle tricks in HIS code by simply looking at it for half
an hour, which is what you have most of the time?
How much time do you think I have to review YOUR code
and then someone else's code, and then someone else's?
You want me to spend half of my time doing YOUR "code review"?
Are you a lunatic?
Do you live in a world of pipe dreams?
Yes, AND unit tests,
AND system tests,
and ANY tests you can imagine.
The more, the better.
and all the rest), you will have
problems (and Java won't solve them),
Are you a pervert by ANY chance?
What I see is this sadistic pleasure on your end.
You see, if you said such a thing on a Java group,
they'd tear you to pieces, if they even cared to bother
about it. At best, they'd make you look like a fool,
and laugh their arses off at you.
You must be some "moron" to them if you even conceive of
saying things like that. Because this is about the highest
order insult to Java as a concept.
Because these things are some of the CENTRAL concepts
in the whole Java world.
and if it does, then you won't have memory leaks.
Who does what?
Java won't solve them?
And if it does, you won't have memory leaks?
What kind of logic is this, sire?
Never heard of anything like this?
Too much work without sleep?
The advantage of things like garbage
collection isn't that they solve problems you wouldn't solve
otherwise---
Bullshit.
That is EXACTLY what they are meant to do.
Otherwise, just hack away, wasting up to 10-30% of your
time forever worrying about memory deallocation issues.
Oh, you mean that "smart pointer" paradise?
Well, sorry to tell you, I haven't looked at that thing yet,
and unfortunately have no plans to ever do so.
But... I do have my reservations about it.
Not that I do not trust what you are saying outright.
Oh yes they do.
I have as hard evidence as it gets,
and I process such immense amounts of data and so many different
allocations of so many different object types, that I am not sure
it is going to be easy for you to find something even more
demanding, unless you are a major world bank or a government
doing billions of records of processing.
The only advantage is that they reduce
the total amount of work necessary in the solution.
Not only reduce. They simply eliminate it.
Yes, I do prefer to explicitly deallocate most objects
by resetting the "pointers" to null so that gc kicks in as
soon as it can.
Because, first of all, I have such amounts of allocated memory
that even after I stop some major operation, there may be tens
if not hundreds of megs still sitting idle and not being
deallocated by gc because I am holding those pointers.
And the reason I am holding those pointers is that if you are
interested in looking at some of your results in various program
dialogs, you still have ALL of the most important information
available, even AFTER the operation is totally completed
and success status has been reported, shown to you in your active
dialog and logged into a perpetual rotating log file, down to
quite minute details of your entire operation or job,
within reasonable granularity.
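A minimal sketch of that pattern, with made-up names (CaptureJob,
lastResult) just for illustration: hold the result so dialogs can
still show it after the operation completes, then drop the
reference explicitly once nobody needs it, so gc can reclaim the
memory on its next run:

    import java.util.ArrayList;
    import java.util.List;

    class CaptureJob {
        // Kept alive after the operation completes so dialogs
        // can still display the results.
        private List<byte[]> lastResult;

        void run() {
            List<byte[]> result = new ArrayList<>();
            for (int i = 0; i < 100_000; i++) {
                result.add(new byte[256]);   // stand-in for real packet data
            }
            lastResult = result;             // hold for later inspection
        }

        List<byte[]> lastResult() {
            return lastResult;
        }

        // Explicitly reset the "pointer" to null once the results
        // are no longer needed, so the collector can reclaim the
        // tens of megs as soon as it can.
        void discardResult() {
            lastResult = null;
        }
    }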
The same as what?
Well, again, sorry to tell you, but I have not looked at the
issue of "smart pointers". I cannot argue this one.
ALL I can say: great.
If I ever have time, I'll look at it and maybe, you never
know, maybe I will find some time to convert my code to use the
"smart pointer". Even though I doubt very much I'll be able
to find time to waste on this. Because by now it is
quite acceptable as is and I am about the only one who is
even aware of this issue. Because it does not matter at all.
You can run this program for years and, as long as you don't
restart it, you won't have the memory leaks. Well, I AM
stretching it a bit, just for the sake of argument.
Just to show the relative significance of it.
And, if you are going to tell me that there are no memory
leak problems with C++, or that they are not much more difficult
to handle than in Java, sorry, I do have my reservations.
And that's the kind of comment which gives C++ a bad name.
There are no silver bullets, and in well designed code, you
typically don't have that many smart pointers.
Well, I am quite happy with my design. But there are intricacies.
(Although I'm
pretty sure it depends on the application domain---there are
probably domains where boost::shared_ptr solves 90% of your
memory management problems.
Sorry, I don't want to hear the word boost, unless it is
totally compatible with the Visual Studio compiler without me
lifting a finger, more or less.
What I DO want to hear is the trick you are going to show
me that will make the memory leaks go away by using a
standard, off-the-shelf environment under Windows,
and the name of that environment is Visual Studio.
That is ALL I want to hear, no matter how great anybody's
compiler or development environment is.
Sorry to tell you this.
I've certainly not worked in such a
domain, however.)
Too bad. Try to deal with driver related issues.
That helps.
I did not say infinite.
But I DO have an option of keeping some information
for a more or less infinite time, considering the scope
of practical events. After all, what is the oldest packet
you are interested in seeing when trying to check the last
time your box was attacked, or when you saw some funky packet
just now and are interested in seeing when was the last time
you might have seen it before?
Well, not more than 24 hrs, more or less.
And that is "infinite" in the scheme of things.
Now, I DO have that packet held for you and you CAN look at
it in the packet display window and you can take all the time
you want to do that kind of thing. And I will hold that and
ANY other fishy packet for you for hours at least.
And THAT is the kind of design I like.
I mean TOTAL flexibility. As much as "total" can be applied
to anything in the physical domain.
The classical case of a leak is when you "register" a session in
a map, mapping its id to the session object, and forget to
deregister when the session ends. At least, something along
those lines accounts for most of the leaks I've seen.
Note that this is very application dependent.
Yes, no argument on that one.
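For what it's worth, that pattern is easy to sketch. A
hypothetical SessionRegistry where endSession forgets to remove
the entry; every ended session stays reachable through the map,
so gc can never reclaim it:

    import java.util.HashMap;
    import java.util.Map;

    class Session {
        final byte[] state = new byte[1024 * 1024];   // per-session data
    }

    class SessionRegistry {
        private final Map<String, Session> sessions = new HashMap<>();

        void startSession(String id) {
            sessions.put(id, new Session());
        }

        void endSession(String id) {
            // BUG: forgot sessions.remove(id). The map still holds a
            // reference to the Session, so the collector can never
            // free it -- a leak, even in Java.
        }
    }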
Actual allocation
with a copying collector is about the same as alloca in terms of
performance, and if you can arrange for all of the runs of the
garbage collector to be in otherwise dead time, you'll likely
significantly outperform a system using manual allocation.
You don't "arrange" ANYTHING as far as garbage collector goes.
It does its thing and you can not even force to garbage collect
even though there is a call to gc(). But that call is not
guaranteed to garbage collect. It is hist a hint to gc that
it MAY collect this stuff.
GC is a pretty sophisticated algorithm.
I have not looked at the source, but from what I heard about
it from the top level experts, it IS definetely an impressive
piece of code and I am glad it is there.
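Just to illustrate the point about gc() being a hint, a minimal
sketch; nothing in the language guarantees that a collection has
actually happened by the time the next line runs:

    public class GcHint {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long before = rt.totalMemory() - rt.freeMemory();

            System.gc();   // only a hint; the JVM MAY collect, or do nothing

            long after = rt.totalMemory() - rt.freeMemory();
            System.out.println("used before hint: " + before
                               + " bytes, after: " + after + " bytes");
        }
    }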
I did watch the memory deallocation issue quite carefully.
Because I am dealing with terabytes of data allocation/deallocation
in millions of allocations. So, if ANYTHING so much as sneezes,
this whole thing is going to get screwed up beyond belief
and I may lose so much time that I am not even interested
in telling you how much. It may turn out to be days,
if not weeks, if something fishy happens.
But
there are a couple of if's in there---for most applications I've
seen, the performance difference between garbage collection and
manual allocation won't be significant, and there are
applications where garbage collection is significantly slower,
or simply can't be used.
I do not agree on that one.
From what I saw and tested and verified in pretty memory
intensive situations, I can hardly even notice gc exists.
First of all, what do you think happens if you deallocate memory
manually? Well, what happens on the O/S level is that the buffer
has to be returned to a memory pool. If this buffer happens to be
adjacent to some other free memory region, it has to be merged
so the allocator can find the best-fit existing buffer.
Which means? Well, which means that no matter whether you have
gc or not, the underlying mechanisms are pretty much similar
to what gc does and the net effect, as far as overall system
performance goes, is not that significant. Probably in the range
of 5% of your performance, even though I am pulling this
number out of a hat.
I'll repeat: there's no silver bullet. And I've yet to see a
smart pointer which solved all of the memory management issues
without significant additional work.
Oh, I see. So those "smart pointers" are not exactly the
kind of magic you guys are selling them to be?
Well, that is exactly what I suspected before I even looked at it.
In fact, I do not believe in this magic in the least.
Why did you strip my response?
One more time: YES, there IS such a thing!
For any reasonable definition of "portable". (No GUI code can
be made to work on an embedded system without a terminal,
Huh?
Ok, good enough for now?
for
example.) One of the advantages of Java is that Swing is
actually fairly well designed. From what little I've seen of
the C++ GUI libraries (where unlike Java, you do have a choice),
they weren't as well designed. But they're likely sufficient
for a lot of applications---I know that I use GUI applications
(e.g. Firefox) which are written in C or C++, and they more or
less work.
--
Programmer's Goldmine collections:
http://preciseinfo.org
Tens of thousands of code examples and expert discussions on
C++, MFC, VC, ATL, STL, templates, Java, Python, Javascript,
organized by major topics of language, tools, methods, techniques.