Hi all. I thought I had a pretty good grasp of Python's scoping rules, but
today I noticed something that I don't understand. Can anyone explain to me
why this happens?
>>> x = 'global'
>>> def f1():
...     x = 'local'
...     class C:
...         y = x
...     return C.y
>>> f1()
'local'
>>> def f2():
...     x = 'local'
...     class C:
...         x = x
...     return C.x
>>> f2()
'global'
Start by comparing the disassembly of the two class bodies:
>>> import dis
>>> dis.dis(f1.__code__.co_consts[2])
3 0 LOAD_NAME 0 (__name__)
3 STORE_NAME 1 (__module__)
6 LOAD_CONST 0 ('f1.<locals>.C')
9 STORE_NAME 2 (__qualname__)
4 12 LOAD_CLASSDEREF 0 (x)
15 STORE_NAME 3 (y)
18 LOAD_CONST 1 (None)
21 RETURN_VALUE
>>> dis.dis(f2.__code__.co_consts[2])
3 0 LOAD_NAME 0 (__name__)
3 STORE_NAME 1 (__module__)
6 LOAD_CONST 0 ('f2.<locals>.C')
9 STORE_NAME 2 (__qualname__)
4 12 LOAD_NAME 3 (x)
15 STORE_NAME 3 (x)
18 LOAD_CONST 1 (None)
21 RETURN_VALUE
The only significant difference is at line 4: the first uses
LOAD_CLASSDEREF, which I take to be the class-body version of
LOAD_DEREF for loading values from closures, whereas the second uses
plain LOAD_NAME. LOAD_NAME looks in the current (class) namespace
first, then falls back to globals and builtins. So the first class
body knows about the x in the nonlocal scope, whereas the second does
not and just loads the global (since x doesn't yet exist in the class's
locals dict at that point).
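That "doesn't yet exist in the locals dict" claim can be watched directly, since locals() inside a class body is the class namespace under construction. A sketch (f4 and the attribute names are made up for illustration):

```python
x = 'global'

def f4():
    x = 'local'                    # the enclosing-function x; LOAD_NAME never sees it
    class C:
        had_x = 'x' in locals()    # False: no class-local x exists yet
        x = x                      # LOAD_NAME misses the class namespace, finds the global
        has_x = 'x' in locals()    # True: the assignment created a class-local x
    return C.had_x, C.has_x, C.x

print(f4())                        # -> (False, True, 'global')
```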
Now why doesn't the second version also use LOAD_CLASSDEREF? My guess
is that it's because the assignment makes x a local name of the class
body: if x were referenced a second time in the class, a second
LOAD_CLASSDEREF would again fetch the x from the nonlocal scope
instead of the freshly stored class-local one, which would be
incorrect.
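That guess can be checked by assigning x in the class body and then referencing it again: the second reference sees the class-local value, not the enclosing function's, which is exactly the behaviour LOAD_NAME (and not LOAD_CLASSDEREF) gives. A minimal sketch (f3 is a made-up name):

```python
def f3():
    x = 'local'
    class C:
        x = 'assigned'   # makes x a class-local name
        y = x            # LOAD_NAME finds the class-local 'assigned', not 'local'
    return C.y

print(f3())              # -> 'assigned'
```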