Yes, because as long as you don't understand it, you are in for unpleasant
surprises.
Well, if someone explains what is wrong with my understanding, I
certainly care about that (although I confess to sometimes being
impatient), but someone just stating he is not sure I understand?
No. It will always use the same search order.
So if I understand you correctly, in code like:
c.d = a
b = a
All three names are searched for in all scopes between the local and the
global one. That is what I understand from your statement that [python]
always uses the same search order.
My impression was that python will search for c and a through all
currently active namespaces but will not do so for b.
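To make that concrete, here is a small example of how I picture it
(just an illustration, not a claim about how the implementation
really does it):

a = 1

class C:
    pass

c = C()

def f():
    c.d = a   # 'c' and 'a' are not local, so they are looked up
              # outside the local scope (here: in the global one)
    b = a     # 'b' is simply created in the local scope
    return b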
But a variable that is bound
inside the function (with an assignment) and is not declared global is in
the local namespace.
Aren't we now talking about implementation details? Sure, the compiler
can set things up so that local names are bound to the local scope and
so the same code can be used. But it seems that somewhere the
decision was made that b was in the local scope without looking for
that b in the scopes higher up.
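A snippet like this shows what I mean (as far as I understand it, the
locality of b is decided when the function body is compiled, not when
the line is executed):

a = 1
b = 2

def f():
    c = b   # raises UnboundLocalError when f() is called, because the
    b = a   # assignment on this line already made b local to f
    return c

# calling f() fails, even though a global b exists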
Let me explain a bit more. Suppose I'm writing a python interpreter
in python. One implementation detail is that I have a list of active
scopes, which are dictionaries that map names to objects. At the
start of a new function the scope list is adapted and all local
variables are inserted into the newly activated scope, mapped to
some "Illegal Value" object. Now I also have a SearchName function
that will start at the beginning of a scope list and return the
first scope in which that name exists. The [0] element is the
local scope. Now we come to the line "b = a".
This could then be executed internally as follows:
LeftScope = SearchName("b", ScopeList)
RightScope = SearchName("a", ScopeList)
LeftScope["b"] = RightScope["a"]
But I don't have to do it this way. I already know in which scope
"b" is, the local one, which has index 0. So I could just as well
have that line executed as follows:
LeftScope = ScopeList[0]
RightScope = SearchName("a", ScopeList)
LeftScope["b"] = RightScope["a"]
As far as I understand, both "implementations" would make for
a correct execution of the line "b = a", and because of the
second possibility, b is IMO not conceptually searched for in
the same way as a is searched for, although one could organise
things so that the same code is used for both.
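To make the hypothetical interpreter a bit more concrete, a minimal
sketch of SearchName could look like this (only a sketch, with the
scope dictionaries and the "Illegal Value" sentinel as described
above):

class IllegalValue:
    """Sentinel for a local name that has not been bound yet."""

def SearchName(name, scope_list):
    # return the first (innermost) scope in which 'name' exists
    for scope in scope_list:
        if name in scope:
            return scope
    raise NameError(name)

ScopeList = [{"b": IllegalValue()}, {"a": 1}]  # local scope first, then outer
SearchName("a", ScopeList)["a"]                # -> 1, found in the outer scope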
Of course it is possible that I completely misunderstood how python
is supposed to work and the above is nonsense, in which case
I would appreciate it if you correct me.
Maybe it would have been nice if variables could have been declared as
nested, but I think it shows that nested variables have to be used with
care, similar to globals. Especially not allowing rebinding in intermediate
scopes is a sound principle (`Nested variables considered harmful').
If you need to modify the objects which are bound to names in intermediate
scopes, use methods and give these objects as parameters.
But shouldn't we just do programming in general with care? And if
nested variables are harmful, what then is the big difference
between rebinding them and mutating them, that we should forbid
the first and allow the second?
I understand that python evolved and that this sometimes results
in things that in hindsight could have been done better. But
I sometimes have the impression that the defenders try to defend
those results as a design decision. With your remark above I have
to wonder if someone really thought this through at design time
and came to the conclusion that nested variables are harmful
and thus may not be rebound, but not that harmful, so mutation
is allowed, and if so how he came to that conclusion.
AP> Suppose someone is rather new to python and writes the following
AP> code, manipulating vectors:
AP> A = 10 * [0]
AP> def f(B):
AP>     ...
AP>     for i in xrange(10):
AP>         A += B
AP>     ...
AP> Then he hears about the vector and matrix modules that are around.
AP> So he rewrites his code naively as follows:
AP> A = NullVector(10)
AP> def f(B):
AP>     ...
AP>     A += B
AP>     ...
AP> And it won't work. IMO the principle of least surprise here would
AP> be that it should work.
Well, an assignment A = ... inside f(B) introduces a local variable A.
Therefore so does A = A + B, and therefore also A += B.
The only way to have no surprises at all would be if local variables
had to be declared explicitly, like 'local A'. But that would break
existing code. Or maybe the augmented assignment operators should have
been exempted. Too late now!
Let me make one thing clear. I'm not trying to get the python people to
change anything. The most I hope for is that they would think about this
behaviour for python 3000.
You have the same problem with:
A = [10]
def inner():
    A.append(2)
works but
A = [10]
def inner():
    A += [2]
doesn't. The nasty thing in this example is that although A += [2] looks
like a rebinding syntactically, semantically there is no rebinding done.
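One way to see this (just a quick sketch, nothing official):

A = [10]
B = A
A += [2]        # for a list this extends the object in place,
assert A is B   # so no new object is created and B sees [10, 2] as well

def inner():
    A += [2]    # yet calling inner() raises UnboundLocalError, because
                # the assignment part of += still makes A local to inner

# inner() would raise the error even though the mutation is in place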
I think the cleaner solution is (use parameters and don't rebind):
def inner(A):
    A.append(2)
For what it is worth, my proposal would be to introduce a rebinding
operator (just for the sake of this exchange written as := ).
A line like "b := a" wouldn't make b a local variable but would
search for the name b in all active scopes and then rebind b
there. In terms of my hypothetical interpreter, something like
the following:
LeftScope = SearchName("b", ScopeList)
RightScope = SearchName("a", ScopeList)
LeftScope["b"] = RightScope["a"]
With the understanding that b wouldn't be inserted in the local
scope unless there was an assignment to b somewhere else in
the function.
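In the dictionary model of my hypothetical interpreter, the rebinding
operator could be implemented with something like this (again only a
sketch, with the scope list as above):

def Rebind(name, value, scope_list):
    # rebind 'name' in the nearest scope that already contains it,
    # instead of creating a new binding in the local scope
    for scope in scope_list:
        if name in scope:
            scope[name] = value
            return
    raise NameError(name)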
The augmented assignments could then be redefined in terms of the
rebinding operator instead of the assignment.
I'm not sure this proposal would eliminate all surprises but as
far as I can see it wouldn't break existing code. But I don't think
this proposal would have a serious chance.
In any case thanks for your contribution.