Rotwang
On 21/06/2013 19:26, Rick Johnson wrote:
[...]
I didn't ask what alternative methods of handling default
argument binding exist (I can think of several, but none
of them strikes me as preferable to how Python currently
does it). I asked what would happen in /your/ version of
Python. Which of the alternatives that you present would
have been implemented, if you had designed the language?
The apathetic approach. However, you fail to understand that
whilst Python's current implementation is partly apathetic,
it is also benevolent and malevolent simultaneously. My
approach is purely apathetic. I'll explain later. Stay
tuned.
OK...
[...]
So how does the interpreter know whether an arbitrary
object passed as a default value is mutable or not? Not
that it really matters.
Well i'm glad it does not matter to you because it does not
matter to me either. *shrugs*
which it does
Am i just to take your word for this? You cannot provide an
example?
Of course I can:
py> class hashablelist(list):
...     def __hash__(self):
...         return hash(tuple(self))
...
py> x = hashablelist(range(4))
py> x
[0, 1, 2, 3]
py> # Let's try using x as a dict key:
py> d = {x: 'Hello'}
py> d[x]
'Hello'
py> # Let's try mutating it:
py> x.append(4)
py> x
[0, 1, 2, 3, 4]
Here, allow me to "break the ice":
# Literal
py> d = {[1]:2}
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
d = {[1]:2}
TypeError: unhashable type: 'list'
# Symbol
py> lst = [1]
py> d = {lst:2}
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
d = {lst:2}
TypeError: unhashable type: 'list'
Hmm, maybe only certain mutables work?
Try reading the tracebacks. Notice how they don't say anything about
mutability?
Great, more esoteric
rules! Feel free to enlighten me since i'm not going to
waste one second of my time perusing the docs just to learn
about ANOTHER unintuitive PyWart i have no use for.
Sure. In order to be used as a dictionary key a Python object has to be
hashable. That means its type has to define a __hash__ method, which is
called by the builtin function hash. The __hash__ method should return
an int, and objects that compare equal should have the same hash.
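[Editorial note: a minimal sketch of the contract just described. The `Point` class is hypothetical, purely for illustration; the rule it demonstrates is that objects which compare equal must hash equally, so equal keys find the same dictionary slot.]

```python
class Point:
    """Hypothetical example class honouring the hash/equality contract."""

    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)

    def __hash__(self):
        # Delegate to a tuple of the same fields __eq__ compares,
        # so equal points are guaranteed equal hashes.
        return hash((self.x, self.y))

d = {Point(1, 2): 'here'}
print(d[Point(1, 2)])  # a distinct but equal key finds the entry: 'here'
```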
You are correct. Finally, we can agree on something.
(is this lazy readers day? I swear i explained this earlier)
And here is where you are wrong. In the current implementation
python functions carry the state of mutable default arguments
between successive calls. That's a flaw.
But your description of The Apathetic Approach doesn't say anything
about functions carrying the state of mutable default arguments, or
otherwise. How am I supposed to know how your proposed approach treats
mutable defaults if you don't tell me, even after I explicitly ask?
Observe:
py> def foo(arg=[]):
... arg.append(1)
... print(arg)
...
py> foo()
[1]
py> foo()
[1, 1]
py> foo()
[1, 1, 1]
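[Editorial note: for reference, the fresh-list-per-call behaviour can be had in current Python with the standard None-sentinel idiom; this sketch is not part of either poster's proposal.]

```python
def foo(arg=None):
    # The default is the immutable sentinel None; the list is
    # created inside the body, so each call gets a fresh one.
    if arg is None:
        arg = []
    arg.append(1)
    return arg

print(foo())  # [1]
print(foo())  # [1] -- no state carried between calls
```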
Yes, I am well aware of how Python currently treats mutable default
arguments.
No, no, NO! That's wrong! Subroutines should be stateless.
That means that an empty mutable default argument will
ALWAYS be empty at each call of the subroutine. This is
what should happen:
py> def foo(arg=[]):
... arg.append(1)
... print(arg)
...
py> foo()
[1]
py> foo()
[1]
py> foo()
[1]
Yes, Yes, YES! That is intuitive! That is sane! Now, what if
we pass a reference to a mutable object? What then. Well, let's
see:
py> lst = range(5)
py> lst
[0, 1, 2, 3, 4]
py> def foo(arg=lst):
... arg.append(1)
... print(arg)
...
py> foo()
[0, 1, 2, 3, 4, 1]
py> foo()
[0, 1, 2, 3, 4, 1, 1]
That's fine. Because the object was already created OUTSIDE
the subroutine. So therefore, modifications to the mutable
are not breaking the fundamental principle of statelessness INSIDE
subroutines. The modification is merely a side effect, and
the subroutine is unconcerned with anything that exists
beyond its own scope.
IS ALL THIS REGISTERING YET? DO YOU UNDERSTAND?
No, I don't. These two special cases are not sufficient for me to
determine what semantics you are proposing for the general case. For
example, what happens in the second example if lst is rebound? Does the
default stay the same or does it change to the new value of lst? What
about if you pass a call as a default argument, and then subsequently
change the behaviour of the callable? Does the argument get re-evaluated
every time foo() is called, or is the argument guaranteed to be the same
every time? If the latter, what happens if the argument's type is
modified (e.g. by changing one of its class attributes)? What about
defining functions dynamically, with default arguments that are only
known at runtime? Is there any way to avoid the second type of behaviour
in this case? If so, how? If not, isn't that likely to prove at least as
big a gotcha to people who don't know the rules of RickPy as the problem
you're trying to solve?
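[Editorial note: for context on the rebinding question above, this sketch shows what current Python does: the default expression is evaluated once, at `def` time, and the function keeps a reference to that object regardless of later rebinding of the name.]

```python
lst = [0, 1, 2]

def foo(arg=lst):
    return arg

lst = ['rebound']  # rebind the name; the default still refers
                   # to the original list object captured at def time
print(foo())       # [0, 1, 2]
```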
My buddies? This design flaw is NOT my brainchild. You're
barking up the wrong tree, pal.
But I didn't ask about Python's current approach, which I already
understand quite well. I was asking about what your approach would be.
If this is not your brainchild then why were you telling me about it?
I explained this to MRAB already.
But you haven't said how you test an object for mutability.
Nice attempt at sleight of hand but your logic is clumsy.
You're trying to argue that my use of a "custom callable state
object" (to avoid the esoteric and unintuitive nature of the
current implementation of Python "mutable function
arguments") is somehow only a mere reproduction of the
function behavior and has no positive benefits, when in
fact, it has a HUGE direct benefit:
* AVOIDING A FLAW IN THE LANGUAGE
But it doesn't avoid a flaw in the language, unless you get rid of
function defaults altogether. As long as functions can have default
arguments, people are going to try passing mutable ones. Avoiding the
alleged "flaw" in the language means proposing a different semantics for
when they do, not just explaining how the current behaviour can be
reproduced in other ways (which nobody AFAICR has denied).
It also has quite a few positive side effects:
* CODE ENCAPSULATION
* INTERFACE
* USING THE CORRECT TOOL FOR THE JOB++
* READABILITY
* NO HIDDEN SURPRISES
* LESS BUGGY
How much more justification do you need?
I don't need more justification, I just need you to tell me
unambiguously what alternative behaviour you're proposing. I can't
really compare the pros and cons of the two approaches if I only know
one of them.
Yes, iterating a sequence can be achieved using a "while
loop", but "for loops" should not be thrown out because they
offer a specific type of iteration that nicely complements a
while loop. Plus, for loops do not have any strange side
effects (unlike Python functions).
They exhaust iterators. Whether that is considered "strange" depends on
who is doing the considering (Python's default binding behaviour doesn't
seem strange to most people who are familiar with its data model).
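[Editorial note: a small sketch of the exhaustion behaviour just mentioned, so the point is concrete: once a for loop (or any consumer) has run an iterator to completion, a second pass over the same iterator object yields nothing.]

```python
it = iter([1, 2, 3])
first = [x for x in it]   # consumes the iterator
second = [x for x in it]  # the iterator is already spent
print(first)   # [1, 2, 3]
print(second)  # []
```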
They do one thing and
they do it damn well! So stop picking on for loops.
I want Python functions to also "do one thing and do it
well", and what is that "one thing", you ask? Execute
subprograms.
The fix is simple. No more "magical Python functions", only
stateless routines.
So no generators, then?
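[Editorial note: the rhetorical question lands because generators are exactly Python functions whose purpose is to carry state between calls, as this sketch shows.]

```python
def counter():
    # Local state (n) survives between successive next() calls --
    # precisely the statefulness a "stateless routines only" rule forbids.
    n = 0
    while True:
        n += 1
        yield n

c = counter()
print(next(c))  # 1
print(next(c))  # 2 -- the generator remembered where it was
```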