You say that as if writing "an entire class" was a big complicated
effort. It isn't. It is trivially simple, a single line:
class MyList(list):
...
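Fleshed out, a minimal sketch looks like this (the __getitem__ body is
only a placeholder for whatever custom behaviour is wanted):

class MyList(list):
    def __getitem__(self, key):
        # custom behaviour goes here; the builtin list is never touched
        return super().__getitem__(key)

x = MyList([5, 2, 4, 1])
print(x[1:3])    # [2, 4] -- only MyList instances see the override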
No, I don't think it's big and complicated. I do think it has timing
implications which are undesirable because of how *often* slices are used.
On an embedded target I have to optimize; and I will have to reject
certain parts of Python to make it fit and run fast enough to be useful.
What part of "unexpected" is unclear?
Ahh -- the "I don't know" approach! It's only unexpected if one is a bad
programmer...!
Let me see if I can illustrate a flavour of the sort of things that can
happen if monkey-patching built-ins were allowed.
You create a list and print it:
# simulated output
py> x = [5, 2, 4, 1]
py> print(x)
[1, 2, 4, 5]
<snip>
Finally you search deep into the libraries used in your code, and *five
days later* discover that your code uses library A which uses library B
which uses library C which uses library D which installs a harmless
monkey-patch to print, but only if library E is installed, and you just
happen to have E installed even though your code never uses it, AND that
monkey-patch clashes with a harmless monkey-patch to list.__getitem__
installed by library F. And even though each monkey-patch alone is
harmless, the combination breaks your code's output.
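For concreteness, one plausible shape for the print patch alone --
hypothetical code, but note how small and individually innocent it is
(replacing the print function in the builtins module is ordinary, legal
Python, unlike patching the list type itself):

import builtins

_original_print = builtins.print

def tidy_print(*args, **kwargs):
    # "helpfully" sort any bare list before printing it
    args = tuple(sorted(a) if isinstance(a, list) else a for a in args)
    _original_print(*args, **kwargs)

builtins.print = tidy_print

print([5, 2, 4, 1])    # now prints [1, 2, 4, 5], as in the session above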
Right, which means that people developing the libraries made
contradictory assumptions.
Python allows, but does not encourage, monkey-patching of code written in
pure Python, because it sometimes can be useful. It flat out prohibits
monkey-patching of builtins, because it is just too dangerous.
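In CPython that prohibition is enforced at the type level; any attempt
simply fails (the exact wording of the message varies between versions):

py> list.__getitem__ = lambda self, key: "patched"
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can't set attributes of built-in/extension type 'list'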
Ruby allows monkey-patching of everything. And the result was predictable:
http://devblog.avdi.org/2008/02/23/why-monkeypatching-is-destroying-ruby/
I read that post carefully; and the author explicitly notes that he is
exaggerating.
BUT your point is still well taken.
What you are talking about is namespace preservation; and I am thinking
about it. I can preserve it -- but only if I disallow true Python
primitives in my own interpreter; I can't provide two sets of primitives
in the memory footprint I am using.
From my perspective, the version of Python that I compile will not be
supported by the normal Python help. The predecessor which first forged
this path, Pymite, has the same problems -- however, the benefits
outweigh the disadvantages; and the experiment yielded useful
information on what is redundant in Python (e.g. range is not supported)
and when that redundancy is important for some reason.
If someone had a clear explanation of the disadvantages of allowing an
iterator, or a tuple -- in place of a slice() -- I would have no qualms
dropping the subject. However, I am not finding that yet. I am finding
very small optimization issues...
The size of an object is at least 8 bytes. Hence, three numbers are
going to take at least 24 bytes; and that's 24 bytes in *excess* of the
size of slice() or tuple(), which are merely containers. So -- there
*ARE* savings in memory when using slice(), but it isn't really 2x
memory -- it's more like 20% -- once the actual objects are considered.
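Anyone who wants to check the container overhead on their own build can
compare the two directly (exact numbers vary by platform and CPython
version, and the ints/None they point at are shared objects not counted
here):

import sys
print(sys.getsizeof(slice(1, None, 2)))   # overhead of the slice container
print(sys.getsizeof((1, None, 2)))        # overhead of the equivalent tuple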
The actual *need* for a slice() object still hasn't been demonstrated. I
am thinking that the implementation of __getitem__() is very poor,
probably because of legacy issues.
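For reference, this small probe (plain standard Python, nothing assumed)
shows exactly what __getitem__ receives today: a ready-made slice object
for slicing syntax, a plain int otherwise.

class Probe(list):
    def __getitem__(self, key):
        print("key =", repr(key), "type =", type(key).__name__)
        return super().__getitem__(key)

p = Probe([10, 20, 30, 40])
p[1:3]     # key = slice(1, 3, None)        type = slice
p[::2]     # key = slice(None, None, 2)     type = slice
p[2]       # key = 2                        type = int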
A tuple can also hold None, so (1, None, 2) is still a valid tuple.
Alternatively, an iterator like xrange() could be made which takes None
as a parameter, or a special value like 'inf'.
Since these two values would never be passed to xrange by already
developed code, allowing them would not break working code.
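A purely hypothetical sketch of that idea -- the class name and
behaviour are mine, not an existing API -- would interpret a 3-tuple
containing None exactly as a slice is interpreted today:

class TupleSliceList(list):
    def __getitem__(self, key):
        if isinstance(key, tuple) and len(key) == 3:
            start, stop, step = key
            key = slice(start, stop, step)   # reuse slice internally for now
        return super().__getitem__(key)

x = TupleSliceList([5, 2, 4, 1])
print(x[1, None, 2])    # behaves like x[1::2] and prints [2, 1]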
I am only aware of one possible reason that slice() was once thought to
be necessary; and that is because accessing an element of the tuple would
recursively call __getitem__ on the tuple. But even that is easily
dismissed once the fixed integer indexes are considered.
Your thoughts? Do you have any show stopper insights?