By the way Andrew, the timestamps on your emails appear to be off, or
possibly the time zone. Your posts are allegedly arriving before the
posts you reply to, at least according to my news client.
No, I don't think it big and complicated. I do think it has timing
implications which are undesirable because of how *much* slices are
used. In an embedded target -- I have to optimize; and I will have to
reject certain parts of Python to make it fit and run fast enough to be
useful.
Then I look forward to seeing your profiling results that show that the
overhead of subclassing list is the bottleneck in your application.
Until then, you are making the classic blunder of the premature optimizer:
"More computing sins are committed in the name of efficiency (without
necessarily achieving it) than for any other single reason — including
blind stupidity." — W.A. Wulf
I am not impressed by performance arguments when you have (apparently)
neither identified the bottlenecks in your code, nor even measured the
performance. You are essentially *guessing* where the bottlenecks are,
and *hoping* that some suggested change will be an optimization rather
than a pessimization.
Of course I may be wrong, and you have profiled your code and determined
that the overhead of inheritance is a problem. If so, that's a different
ball game. But your posts so far suggest to me that you're trying to
predict performance optimizations rather than measure them.
Ahh -- The I don't know approach! It's only unexpected if one is a bad
programmer...!
No, it is unexpected because once you start relying on monkey-patching,
and the libraries you install are monkey-patching, you have a
combinatorial explosion of interactions. Any piece of code, anywhere,
could monkey-patch any other piece of code -- it is a form of action-at-a-
distance coding, like GOTOs and global variables. Use it with caution, in
moderation.
Let me see if I can illustrate a flavour of the sort of things that can
happen if monkey-patching built-ins were allowed.
[...]
Right, which means that people developing the libraries made
contradictory assumptions.
Not necessarily. Not only can monkey-patches conflict, but they can
combine in bad ways. It isn't just that Fred assumes X and Barney assumes
not-X, but also that Fred assumes X and Barney assumes Y and *nobody*
imagined that there was some interaction between X and Y.
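To make that concrete, here is a minimal sketch of the Fred and Barney
scenario. The Seq class and both "libraries" are hypothetical, and an
ordinary list subclass stands in for a built-in, since CPython won't let
you patch list itself:

class Seq(list):
    """Stand-in for a built-in sequence type that patching could reach."""

# Fred's library keeps sequences sorted, for fast searching.
# Assumption X: nobody cares about insertion order.
def fred_install():
    original_append = Seq.append
    def append(self, item):
        original_append(self, item)
        self.sort()
    Seq.append = append

# Barney's library implements an undo stack.
# Assumption Y: append puts the newest item at the end, so pop()
# always removes the most recent action.
def barney_undo(stack):
    return stack.pop()

fred_install()

actions = Seq()
actions.append("delete file")
actions.append("create file")
# Barney expects "create file" (the last action), but Fred's patch has
# silently re-sorted the stack, so this undoes "delete file" instead.
print(barney_undo(actions))

Each assumption is perfectly reasonable on its own; it is only the
combination that corrupts Barney's undo stack, and neither author's test
suite would catch it.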
[...]
I read that post carefully; and the author purposely notes that he is
exaggerating.
Not carefully enough. He notes that he was using a provocative title and
that he doesn't actually think that Ruby is being destroyed. But the
actual harm he describes is real, e.g. bugs that take months to track
down.
What you are talking about is namespace preservation;
I haven't mentioned namespaces. Nothing I have said has anything to do
with namespaces. I remember Apple monkey-patching routines in ROM back in
the mid 1980s, long before there was anything corresponding to namespaces
in Apple's programming model.
and I am thinking
about it. I can preserve it -- but only if I disallow true Python
primitives in my own interpreter; I can't provide two sets in the memory
footprint I am using.
If you want to write a language that is not Python, go right ahead.
If someone had a clear explanation of the disadvantages of allowing an
iterator, or a tuple -- in place of a slice() -- I would have no qualms
dropping the subject. However, I am not finding that yet. I am finding
very small optimization issues...
Python has certain public interfaces. If your language does not support
those public interfaces, then it might be an awesome language, but it is
not Python.
Python slices have certain features:
* they can be used repeatedly;
* they have public attributes start, step and stop;
* the stop attribute can be None, and the slice will then default to the
  length of the thing being sliced, which is not known until call-time;
* they have a public indices method.
These are known characteristics of slices, and there is Python code that
relies on them. If your language does not support these, then it is not
Python.
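Each of those features is easy to verify at the interactive prompt:

s = slice(1, None, 2)           # stop is None: "to the end, however long"

print(s.start, s.stop, s.step)  # public attributes: 1 None 2

# The same slice object can be applied again and again, to sequences
# of different lengths.
print([10, 20, 30, 40, 50][s])  # [20, 40]
print("abcdef"[s])              # bdf

# indices() resolves the None stop and any negative values against a
# concrete length, giving arguments ready for range().
print(s.indices(5))             # (1, 5, 2)
print(s.indices(3))             # (1, 3, 2)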
Iterators cannot replace slices, because once you have looked up an
iterator value, that value is gone, never to be seen again.
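Compare, at the prompt:

it = iter([10, 20, 30])
print(list(it))         # [10, 20, 30]
print(list(it))         # [] -- the values are consumed, the iterator is spent

s = slice(None)
print([10, 20, 30][s])  # [10, 20, 30]
print([10, 20, 30][s])  # [10, 20, 30] -- a slice can be reused indefinitely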
Tuples cannot replace slices, because tuples do not have start, step and
stop attributes; most tuples have no need of them, and if you care about
memory footprint, the last thing you want is to give every tuple three
unused or meaningless named attributes. Besides, what values should they
take in an empty tuple?
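The missing attributes are easy to demonstrate:

try:
    (1, None, 2).start
except AttributeError as err:
    print(err)  # 'tuple' object has no attribute 'start'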
xrange cannot replace slices, because there is no way to create an xrange
object with an empty or missing end; besides, xrange objects have no
public start/stop/step attributes.
And none of them have a public indices method.
The actual *need* for a slice() object still hasn't been demonstrated.
Hell, the actual *need* for slicing hasn't been demonstrated! Or Python!
Since C doesn't have slicing, nobody needs it! Am I right?
Needed or not, that is what Python has, so if your language doesn't have
it, it isn't Python.
I
am thinking that the implementation of __getitem__() is very poor,
probably because of legacy issues.
You haven't demonstrated that it is very poor.
A tuple can also hold None, so (1, None, 2) is still a valid tuple.
Sure. And a tuple can also be () or (1, "x", [], None, None, None, None).
Now your list.__getitem__ code has to deal with those instead.
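To see how little the current protocol actually demands, here is a minimal
sketch (the MySeq class is hypothetical) of a container handling slices in
__getitem__; slice.indices does nearly all of the work:

class MySeq(object):
    """Minimal container showing what __getitem__ sees today."""
    def __init__(self, data):
        self._data = list(data)
    def __len__(self):
        return len(self._data)
    def __getitem__(self, index):
        if isinstance(index, slice):
            # indices() normalises everything for us: a None stop,
            # negative indices, out-of-range values.
            start, stop, step = index.indices(len(self._data))
            return [self._data[i] for i in range(start, stop, step)]
        return self._data[index]

seq = MySeq("abcdef")
print(seq[1:None:2])  # ['b', 'd', 'f']

Accept a bare (1, None, 2) tuple instead, and every container has to check
the tuple's length, reject junk like (1, "x", []), decide what () means,
and re-implement indices() by hand.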
Alternately: An iterator, like xrange(), could be made which takes None
as a parameter, or a special value like 'inf'.
So you want to get rid of slices because you want to save every byte of
memory you can, and your way to do that is to introduce a new, *heavier*
type that does more than slices?
Oh man, that's funny.