Multi-dimensional list initialization

Demian Brecht

So, here I was thinking "oh, this is a nice, easy way to initialize a 4D matrix" (running 2.7.3, non-core libs not allowed):

m = [[None] * 4] * 4

The way to get what I was after was:

m = [[None] * 4, [None] * 4, [None] * 4, [None] * 4]

(Obviously, I could have just hardcoded the initialization, but I'm too lazy to type all that out ;))

The behaviour I encountered seems a little contradictory to me. [None] * 4 creates four distinct elements in a single array while [[None] * 4] * 4 creates one distinct array of four distinct elements, with three references to it:

>>> a = [None] * 4
>>> a[0] = 'a'
>>> a
['a', None, None, None]
>>> m = [[None] * 4] * 4
>>> m[0][0] = 'm'
>>> m
[['m', None, None, None], ['m', None, None, None], ['m', None, None, None], ['m', None, None, None]]

Is this expected behaviour and if so, why? In my mind either result makes sense, but the inconsistency is what throws me off.

Demian Brecht
@demianbrecht
http://demianbrecht.github.com
 
Hans Mulder

So, here I was thinking "oh, this is a nice, easy way to initialize a 4D matrix"
(running 2.7.3, non-core libs not allowed):

m = [[None] * 4] * 4

The way to get what I was after was:

m = [[None] * 4, [None] * 4, [None] * 4, [None] * 4]

Or alternatively:

m = [[None] * 4 for _ in range(4)]
(Obviously, I could have just hardcoded the initialization, but I'm too
lazy to type all that out ;))

The behaviour I encountered seems a little contradictory to me.
[None] * 4 creates four distinct elements in a single array

Actually, it creates a list with four references to the same object.
But then, this object is immutable, so you won't notice that it's the
same object.
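[A quick interpreter sketch, added here for illustration, makes the reference sharing visible with `is`:]

```python
# [None] * 4 stores four references to the one (immutable) None object,
# so the sharing is harmless: rebinding a slot doesn't touch the others.
a = [None] * 4
print(all(x is a[0] for x in a))   # True: same object in every slot
a[0] = 'a'                         # points slot 0 at a new object
print(a)                           # ['a', None, None, None]
```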
while [[None] * 4] * 4 creates one distinct array of four distinct
elements, with three references to it:

We usually phrase that as "a list with four references to the
same list". The first reference is not special in any way.
>>> a = [None] * 4
>>> a[0] = 'a'
>>> a
['a', None, None, None]
>>> m = [[None] * 4] * 4
>>> m[0][0] = 'm'
>>> m
[['m', None, None, None], ['m', None, None, None], ['m', None, None, None], ['m', None, None, None]]

Is this expected behaviour
Yes.

and if so, why? In my mind either result makes sense, but the
inconsistency is what throws me off.

There's no inconsistency: in both cases you get a list with four
references to the same object. The only difference is that in the
fist case, the references are to an immutable object, so the fact
that it's the same object won't hurt you.


Hope this helps,

-- HansM
 
wxjmfauth

On Monday, 5 November 2012 at 07:28:00 UTC+1, Demian Brecht wrote:
So, here I was thinking "oh, this is a nice, easy way to initialize a 4D matrix" (running 2.7.3, non-core libs not allowed):

m = [[None] * 4] * 4

[snip -- full quote of Demian's post trimmed]
----------

You probably mean a two-dimensional matrix, not a 4D matrix.

>>> def DefMatrix(nrow, ncol, val):
...     return [[val] * ncol for i in range(nrow)]
...
>>> aa = DefMatrix(2, 3, 1.0)
>>> aa
[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]
>>> aa[0][0] = 3.14
>>> aa[1][2] = 2.718
>>> aa
[[3.14, 1.0, 1.0], [1.0, 1.0, 2.718]]

>>> bb = DefMatrix(2, 3, None)
>>> bb
[[None, None, None], [None, None, None]]
>>> bb[0][0] = 3.14
>>> bb[1][2] = 2.718
>>> bb
[[3.14, None, None], [None, None, 2.718]]


jmf
 
Oscar Benjamin

So, here I was thinking "oh, this is a nice, easy way to initialize a 4D matrix"
(running 2.7.3, non-core libs not allowed):

m = [[None] * 4] * 4

The way to get what I was after was:

m = [[None] * 4, [None] * 4, [None] * 4, [None] * 4]

Or alternatively:

m = [[None] * 4 for _ in range(4)]

That's the way to do it.

I've seen this question many times between here and the python-tutor
list. It does seem to be a common gotcha.
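[A sketch of the gotcha, added for illustration: counting distinct row objects with id() shows exactly how the two spellings differ.]

```python
# One row object referenced four times vs. four independent rows.
shared   = [[None] * 4] * 4
distinct = [[None] * 4 for _ in range(4)]

print(len({id(row) for row in shared}))    # 1: a single row, repeated
print(len({id(row) for row in distinct}))  # 4: four separate rows

shared[0][0] = 'm'
print(shared[3][0])    # 'm' -- the "other" rows changed too
distinct[0][0] = 'm'
print(distinct[3][0])  # None -- rows are independent
```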

I was just thinking to myself that it would be a hard thing to change
because the list would need to know how to instantiate copies of all
the different types of the elements in the list. Then I realised it
doesn't. It is simply a case of how the list multiplication operator
is implemented and whether it chooses to use a reference to the same
list or make a copy of that list. Since all of this is implemented
within the same list type it is a relatively easy change to make
(ignoring backward compatibility concerns).

I don't see this non-copying list multiplication behaviour as
contradictory but has anyone ever actually found a use for it?


Oscar
 
Chris Angelico

I was just thinking to myself that it would be a hard thing to change
because the list would need to know how to instantiate copies of all
the different types of the elements in the list. Then I realised it
doesn't. It is simply a case of how the list multiplication operator
is implemented and whether it chooses to use a reference to the same
list or make a copy of that list. Since all of this is implemented
within the same list type it is a relatively easy change to make
(ignoring backward compatibility concerns).

I don't see this non-copying list multiplication behaviour as
contradictory but has anyone ever actually found a use for it?

Stupid example of why it can't copy:

bad = [open("test_file")] * 4

How do you clone something that isn't Plain Old Data? Ultimately,
that's where the problem comes from. It's easy enough to clone
something that's all scalars (strings, integers, None, etc) and
non-recursive lists/dicts of scalars, but anything more complicated
than that is rather harder.

If you want a deep copy and are prepared to handle any issues that
might result, you can do this:
>>> import copy
>>> a = [[2, 3, 4]]
>>> a.extend(copy.deepcopy(a))
>>> a[0][1] = 10
>>> a
[[2, 10, 4], [2, 3, 4]]

And some things just won't work:

>>> bad.extend(copy.deepcopy(bad))
Traceback (most recent call last):
  File "<pyshell#17>", line 1, in <module>
    bad.extend(copy.deepcopy(bad))
  File "C:\Python32\lib\copy.py", line 147, in deepcopy
    y = copier(x, memo)
  File "C:\Python32\lib\copy.py", line 209, in _deepcopy_list
    y.append(deepcopy(a, memo))
  File "C:\Python32\lib\copy.py", line 166, in deepcopy
    rv = reductor(2)
TypeError: cannot serialize '_io.TextIOWrapper' object

The default behaviour is safe and reliable. When you want something
other than the default, there are ways of doing it.

ChrisA
 
Oscar Benjamin

I was just thinking to myself that it would be a hard thing to change
because the list would need to know how to instantiate copies of all
the different types of the elements in the list. Then I realised it
doesn't. It is simply a case of how the list multiplication operator
is implemented and whether it chooses to use a reference to the same
list or make a copy of that list. Since all of this is implemented
within the same list type it is a relatively easy change to make
(ignoring backward compatibility concerns).

I don't see this non-copying list multiplication behaviour as
contradictory but has anyone ever actually found a use for it?

Stupid example of why it can't copy:

bad = [open("test_file")] * 4

How do you clone something that isn't Plain Old Data? Ultimately,
that's where the problem comes from. It's easy enough to clone
something that's all scalars (strings, integers, None, etc) and
non-recursive lists/dicts of scalars, but anything more complicated
than that is rather harder.

That's not what I meant. But now you've made me realise that I was
wrong about what I did mean. In the case of

stuff = [[obj] * n] * m

I thought that the multiplication of the inner list ([obj] * n) by m
could create a new list of lists using copies. On closer inspection I
see that the list being multiplied is in fact [[obj] * n] and that
this list can only know that it is a list of lists by inspecting its
element(s) which makes things more complicated.

I retract my claim that this change would be easy to implement.


Oscar
 
Andrew Robinson

I was just thinking to myself that it would be a hard thing to change
because the list would need to know how to instantiate copies of all
the different types of the elements in the list. Then I realised it
doesn't. It is simply a case of how the list multiplication operator
is implemented and whether it chooses to use a reference to the same
list or make a copy of that list. Since all of this is implemented
within the same list type it is a relatively easy change to make
(ignoring backward compatibility concerns).

I don't see this non-copying list multiplication behaviour as
contradictory but has anyone ever actually found a use for it?
Stupid example of why it can't copy:

bad = [open("test_file")] * 4

How do you clone something that isn't Plain Old Data? Ultimately,
that's where the problem comes from. It's easy enough to clone
something that's all scalars (strings, integers, None, etc) and
non-recursive lists/dicts of scalars, but anything more complicated
than that is rather harder.
That's not what I meant. But now you've made me realise that I was
wrong about what I did mean. In the case of

stuff = [[obj] * n] * m

I thought that the multiplication of the inner list ([obj] * n) by m
could create a new list of lists using copies. On closer inspection I
see that the list being multiplied is in fact [[obj] * n] and that
this list can only know that it is a list of lists by inspecting its
element(s) which makes things more complicated.

I retract my claim that this change would be easy to implement.


Oscar
Hi Oscar,

In general, people don't use element multiplication (that I have *ever*
seen) to make lists where all elements of the outer most list point to
the same sub-*list* by reference. The most common use of the
multiplication is to fill an array with a constant, or short list of
constants; Hence, almost everyone has to work around the issue as the
initial poster did by using a much longer construction.

The most compact notation in programming really ought to reflect the
most *commonly* desired operation. Otherwise, we're really just making
people do extra typing for no reason.

Further, list comprehensions take quite a bit longer to run than low
level copies; by a factor of roughly 10. So, it really would be worth
implementing the underlying logic -- even if it wasn't super easy.

I really don't think doing a shallow copy of lists would break anyone's
program.
The non-list elements, whatever they are, can be left as reference
copies -- but any element which is a list ought to be shallow copied.
The behavior observed in the opening post where modifying one element of
a sub-list, modifies all elements of all sub-lists is never desired as
far as I have ever witnessed.

The underlying implementation of Python can check an object type
trivially, and the only routine needed is a shallow list copy. So, no
it really isn't a complicated operation to do shallow copies of lists.
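[For concreteness, the proposed semantics might look something like this sketch. `mul_copying_lists` is a hypothetical helper illustrating the proposal, not Python's actual behaviour:]

```python
def mul_copying_lists(seq, n):
    # Hypothetical sketch of the proposed semantics: elements that are
    # lists get a fresh shallow copy per repetition; everything else
    # stays a plain reference, as today.
    out = []
    for _ in range(n):
        out.extend(x[:] if isinstance(x, list) else x for x in seq)
    return out

m = mul_copying_lists([[None] * 4], 4)
m[0][0] = 'm'
print(m[0])  # ['m', None, None, None]
print(m[1])  # [None, None, None, None] -- rows now independent
```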

:)
 
Chris Angelico

I really don't think doing a shallow copy of lists would break anyone's
program.

Well, it's a change, a semantic change. It's almost certainly going to
break _something_. But for the sake of argument, we can suppose that
the change could be made. Would it be the right thing to do?

Shallow copying by default would result in extremely weird behaviour.
All the same confusion would result, only instead of comparing
[None]*4 with [[None]]*4, there'd be confusion over the difference
between [[None]]*4 and [[[None]]]*4.

I don't think it would help anything, and it'd result in a lot more
work for no benefit.

ChrisA
 
Andrew Robinson

I really don't think doing a shallow copy of lists would break anyone's
program.
Well, it's a change, a semantic change. It's almost certainly going to
break _something_. But for the sake of argument, we can suppose that
the change could be made. Would it be the right thing to do?

Shallow copying by default would result in extremely weird behaviour.
All the same confusion would result, only instead of comparing
[None]*4 with [[None]]*4, there'd be confusion over the difference
between [[None]]*4 and [[[None]]]*4.

I don't think it would help anything, and it'd result in a lot more
work for no benefit.

ChrisA
I don't follow.
a=[ None ]*4 would give a=[ None, None, None, None ] as usual.
All four None's would be the same object, but there are automatically 4
different pointers to it.
Hence,
a[0]=1 would give a=[ 1, None, None, None ] as usual.

a=[ [None] ]*4 would give a=[ [None], [None], [None], [None] ] as usual
BUT:
a[0][0] = 1 would no longer give a=[ [1],[1],[1],[1] ]. *Rather*, it would
give
a=[ [1], [None], [None], [None] ]

The None objects are all still the same one, BUT the lists themselves
are different.

Again, a=[ ["alpha","beta"] ] * 4 would give:
a=[ ["alpha","beta"], ["alpha","beta"], ["alpha","beta"], ["alpha","beta"] ]

All four strings, "alpha", are the same object -- but there are 5
different lists; The pointers inside the initial list are copied four
times -- not the string objects;
But the *lists* themselves are created new for each replication.

If you nest it another time;
[[[None]]]*4, the same would happen; all lists would be independent --
but the objects which aren't lists would be referenced -- not copied.

a=[[["alpha","beta"]]]*4 would yield:
a=[[['alpha', 'beta']], [['alpha', 'beta']], [['alpha', 'beta']],
[['alpha', 'beta']]]
and a[0][0]=1 would give a=[[1], [['alpha', 'beta']], [['alpha', 'beta']],
[['alpha', 'beta']]]
rather than a=[[1], [1], [1], [1]]

Or at another level down: a[0][0][0]=1 would give: a=[[[1, 'beta']],
[['alpha', 'beta']], [['alpha', 'beta']], [['alpha', 'beta']] ]
rather than a=[[[1, 'beta']], [[1, 'beta']], [[1, 'beta']], [[1, 'beta']]]

The point is, there would be no difference at all noticed in what data
is found where in the array;
the *only* thing that would change is that replacing an item by
assignment would only affect the *location* assigned to -- all other
locations would not be affected.

That really is what people *generally* want.
If the entire list is meant to be read only -- the change would affect
*nothing* at all.

See if you can find *any* python program where people desired the
multiplication to have the side effect that changing an object in one of
the sub lists -- changes all the objects in the other sub lists.

I'm sure you're not going to find it -- and even if you do, it's going
to be 1 program in 1000's.
 
Steven D'Aprano

The most compact notation in programming really ought to reflect the
most *commonly* desired operation. Otherwise, we're really just making
people do extra typing for no reason.

There are many reasons not to put minimizing of typing ahead of all other
values:

* Typically, code is written once and read many times. Minimizing
typing might save you a second or two once, and then cost you many
seconds every time you read the code. That's why we tell people to
choose meaningful variable names, instead of naming everything "a"
and "b".

* Consistency of semantics is better than a plethora of special
cases. Python has a very simple and useful rule: objects should
not be copied unless explicitly requested to be copied. This is
much better than having to remember whether this operation or
that operation makes a copy. The answer is consistent:

(pardon me for belabouring the point here)

Q: Does [0]*10 make ten copies of the integer object?
A: No, list multiplication doesn't make copies of elements.

Q: How about [0.0]*10?
A: No, the elements are never copied.

Q: What if I use a singleton? Does [None]*10 try to copy?
A: No, the elements are never copied.

Q: How about things like file objects that can't be copied?
A: No, the elements are never copied.

Q: What about [[]]*10?
A: No, the elements are never copied.

Q: How about if the elements are subclasses of list?
A: No, the elements are never copied.

Q: What about other mutable objects like sets or dicts?
A: No, the elements are never copied.

Q: What about instances of custom classes?
A: No, the elements are never copied.

Q: What if they are old-style Classic classes?
A: No, the elements are never copied.

Q: What if I do some funny tricks with the metaclass?
A: No, the elements are never copied.

Q: How about on Tuesdays? I bet they're copied on Tuesdays.
A: No, the elements are never copied.



Your proposal throws away consistency for a trivial benefit on a rare use-
case, and replaces it with a bunch of special cases:

Q: What about [[]]*10?
A: Oh yeah, I forgot about lists, they're copied.

Q: How about if the elements are subclasses of list?
A: Hmmm, that's a good one, I'm not actually sure.

Q: How about if I use delegation to proxy a list?
A: Oh no, they definitely won't be copied.

Q: What about other mutable objects like sets or dicts?
A: No, definitely not. Unless people complain enough.

Q: What about instances of custom classes?
A: That's a definite maybe.

Q: How about on Tuesdays? I bet they're copied on Tuesdays.
A: Only if you're in Belgium.


Losing consistency in favour of saving a few characters for something as
uncommon as list multiplication is a poor tradeoff. That's why this
proposal has been rejected again and again and again every time it has
been suggested.

List multiplication [x]*n is conceptually equivalent to:

newlist = []
for i in range(n):
    newlist.append(x)

or if you prefer a list comp:

[x for i in range(n)]

This is nice and simple and efficient. Some objects cannot be copied at
all. Copying other objects is slow and inefficient. Keeping list
multiplication consistent, and fast, is MUCH more important than making
it work as expected for the rare case of 2D arrays:

[[0]*n]*m

where there are other alternatives.

Further, list comprehensions take quite a bit longer to run than low
level copies; by a factor of roughly 10. So, it really would be worth
implementing the underlying logic -- even if it wasn't super easy.

Copying those elements does not come for free.

It is true that list multiplication can be much faster than a list comp.
But that's because the list multiply doesn't have to inspect the
elements, copy them, or engage the iteration machinery. Avoiding copying
gives you a big saving:


[steve@ando ~]$ python3.3 -m timeit -s "x = range(1000)"
"[x for _ in range(100)]" # not copied
100000 loops, best of 3: 11.9 usec per loop

[steve@ando utilities]$ python3.3 -m timeit -s "x = range(1000)"
"[x[:] for _ in range(100)]" # copied
10000 loops, best of 3: 103 usec per loop

So there's a factor of ten difference right there. If list multiplication
had to make copies, it would lose much of its speed advantage. For large
enough lists, or complicated enough objects, it would become slower than
a list comprehension.

It would be even slower if list multiplication had to inspect each
element first and decide whether or not to copy.
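[The same comparison can be reproduced from inside Python with the timeit module. A sketch; absolute numbers vary by machine, but the ordering holds:]

```python
import timeit

# Compare building 100-element lists of references vs. 100 shallow copies
# of a 1000-element list, mirroring the shell timings above.
x = list(range(1000))
t_ref  = timeit.timeit(lambda: [x for _ in range(100)], number=100)
t_copy = timeit.timeit(lambda: [x[:] for _ in range(100)], number=100)
print(t_copy > t_ref)  # copying each row costs far more than referencing it
```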


I really don't think doing a shallow copy of lists would break anyone's
program.

Anyone who is currently using list multiplication with mutable objects is
expecting that they will be the same object, and relying on that fact.
Otherwise they wouldn't be using list multiplication.

You're suggesting a semantic change. Therefore they will be expecting
something different from what actually happens. Result: broken code.

It's not just mutable objects. It's also objects that can't be copied.
Result: mylist*3 used to work, now it raises an exception. And
performance issues: what used to be fast is now slow.

Even if this change was allowed, it would have to go through a multi-year
process. Python 3.3 is too late -- the absolute earliest would be Python
3.4, which is scheduled for about 18 months from now. So in Python 3.4
you could write:

from __future__ import list_multiplication_copying

to get the behaviour you want, and then in Python 3.5 it would become the
default. That's three years until it becomes the standard. Meanwhile,
there will still be millions of people using Python 2.7 or 3.2, and their
code will behave differently from your code.

Conservatively, if you write code to support three previous releases,
that means you can't use this feature until Python 3.7. So that's about
six years before it can be used widely.

If the problem being solved was big enough, this would be worth doing.
But it's not.

The non-list elements, whatever they are, can be left as reference
copies -- but any element which is a list ought to be shallow copied.

That's even worse than "list multiplication always copies". At least that
is simple and consistent, even if it isn't consistent with the rest of
the language, at least it is self-consistent. You are proposing something
much worse: special cases to remember. "Objects aren't copied, except for
lists, which are copied."

And then people will wonder why sets aren't copied, and dicts. People
will make a 2D array like so:

[[0]*5]*10

and it will work. Then they'll write this:

[{}]*5

and wonder why it doesn't work the way they expect. Consistency is *much*
more valuable than ad hoc DWIM semantics. Languages that try to Do What I
Mean somehow end up Doing What Somebody Else Meant, But Not Me.
 
Ian Kelly

If you nest it another time;
[[[None]]]*4, the same would happen; all lists would be independent -- but
the objects which aren't lists would be referenced -- not copied.

a=[[["alpha","beta"]]]*4 would yield:
a=[[['alpha', 'beta']], [['alpha', 'beta']], [['alpha', 'beta']], [['alpha',
'beta']]]
and a[0][0]=1 would give [[1],[['alpha', 'beta']], [['alpha', 'beta']],
[['alpha', 'beta']]]]
rather than a=[[1], [1], [1], [1]]

Or at another level down: a[0][0][0]=1 would give: a=[[[1, 'beta']],
[['alpha', 'beta']], [['alpha', 'beta']], [['alpha', 'beta']] ]
rather than a=[[[1, 'beta']], [[1, 'beta']], [[1, 'beta']], [[1, 'beta']]]

You wrote "shallow copy". When the outer-level list is multiplied,
the mid-level lists would be copied. Because the copies are shallow,
although the mid-level lists are copied, their contents are not. Thus
the inner-level lists would still be all referencing the same list.
To demonstrate:
>>> from copy import copy
>>> class ShallowCopyList(list):
...     def __mul__(self, number):
...         new_list = ShallowCopyList()
...         for _ in range(number):
...             new_list.extend(map(copy, self))
...         return new_list
...
>>> a = ShallowCopyList([[["alpha", "beta"]]])
>>> a
[[['alpha', 'beta']]]
>>> b = a * 4
>>> b
[[['alpha', 'beta']], [['alpha', 'beta']], [['alpha', 'beta']],
[['alpha', 'beta']]]
>>> b[0][0][0] = 1
>>> b
[[[1, 'beta']], [[1, 'beta']], [[1, 'beta']], [[1, 'beta']]]
>>> b[0][0] = 1
>>> b
[[1], [[1, 'beta']], [[1, 'beta']], [[1, 'beta']]]

This shows that assignments at the middle level are independent with a
shallow copy on multiplication, but assignments at the inner level are
not. In order to achieve the behavior you describe, a deep copy would
be needed.
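[To make that concrete, a deep-copying multiplication -- `mul_deep` is a hypothetical helper named here for illustration -- gives the full independence being described:]

```python
import copy

def mul_deep(row, m):
    # Hypothetical sketch: replicate `row` m times using deep copies,
    # so *every* level of nesting is independent.
    return [copy.deepcopy(row) for _ in range(m)]

a = mul_deep([["alpha", "beta"]], 4)
a[0][0][0] = 1
print(a[0])  # [[1, 'beta']]
print(a[1])  # [['alpha', 'beta']] -- the other rows are untouched
```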
That really is what people *generally* want.
If the entire list is meant to be read only -- the change would affect
*nothing* at all.

The time and memory cost of the multiplication operation would become
quadratic instead of linear.
See if you can find *any* python program where people desired the
multiplication to have the side effect that changing an object in one of the
sub lists -- changes all the objects in the other sub lists.

I'm sure you're not going to find it -- and even if you do, it's going to be
1 program in 1000's.

Per the last thread where we discussed extremely rare scenarios,
shouldn't you be rounding "1 in 1000s" up to 20%? ;-)
 
Shambhu Rajak

Well said Steve, I agree with you...
-Shambhu

-----Original Message-----
From: Steven D'Aprano
Sent: Tuesday, November 06, 2012 2:35 PM
Subject: Re: Multi-dimensional list initialization

[snip -- full quote of Steven's reply trimmed]
 
Prasad, Ramit

Ian said:
On Tue, Nov 6, 2012 at 1:21 AM, Andrew Robinson

[snip]
See if you can find *any* python program where people desired the
multiplication to have the side effect that changing an object in one of the
sub lists -- changes all the objects in the other sub lists.

I'm sure you're not going to find it -- and even if you do, it's going to be
1 program in 1000's.

Per the last thread where we discussed extremely rare scenarios,
shouldn't you be rounding "1 in 1000s" up to 20%? ;-)

Actually, I would be surprised if it was even 1 in 1000.
Of course, consistency makes it easier to learn and *remember*.
I value that far more than a minor quirk that is unlikely to
bother me now that I know of it. Well, at least not as long as
I do not forget my morning coffee/tea :)


~Ramit


 
Andrew Robinson

In general, people don't use element multiplication (that I have *ever*
seen) to make lists where all elements of the outer most list
point to the same sub-*list* by reference. The most common use of the
multiplication is to fill an array with a constant, or short list of
constants; Hence, almost everyone has to work around the issue as
the initial poster did by using a much longer construction.

That's what I have seen as well. I've never seen an example where
someone wanted this behaviour.

most *commonly* desired operation. Otherwise, we're really just
making people do extra typing for no reason.

It's not so much the typing as the fact that this a common gotcha.
Apparently many people expect different behaviour here. I seem to
remember finding this surprising at first.
:) That's true as well.

level copies; by a factor of roughly 10. SO, it really would be worth
implementing the underlying logic -- even if it wasn't super easy.
copies -- but any element which is a list ought to be shallow copied.
The behavior observed in the opening post where modifying one element
of a sub-list, modifies all elements of all sub-lists is never desired
as far as I have ever witnessed.

It is a semantic change that would, I imagine, break many things in
subtle ways.
?? Do you have any guesses how?
trivially, and the only routine needed is a shallow list copy. So, no
it really isn't a complicated operation to do shallow copies of lists.

Yes but if you're inspecting the object to find out whether to copy it
what do you test for? If you check for a list type what about
subclasses? What if someone else has a custom list type that is not a
subclass? Should there be a dunder method for this?
No dunder methods. :)
Custom non-subclass list types aren't a common usage for list
multiplication in any event.
At present one has to do list comprehensions for that, and that would
simply remain so.

Subclasses, however, are something I hadn't considered...
I don't think it's such a simple problem.

Oscar
You made a good point, Oscar; I'll have to think about the subclassing a
bit.
:)
 
A

Andrew Robinson

Ian said:
On Tue, Nov 6, 2012 at 1:21 AM, Andrew Robinson
[snip]
See if you can find *any* python program where people desired the
multiplication to have the side effect that changing an object in one of the
sub lists -- changes all the objects in the other sub lists.

I'm sure you're not going to find it -- and even if you do, it's going to be
1 program in 1000's.
Per the last thread where we discussed extremely rare scenarios,
shouldn't you be rounding "1 in 1000s" up to 20%? ;-)
:D -- Ian -- also consider that I *am* willing to use extra memory.
Not everything can be shrunk to nothing and still remain functional. :)
So, it isn't *all* about *micro* optimization -- it's also about
psychology and flexibility.
Actually, I would be surprised if it was even 1 in 1000.
Of course, consistency makes it easier to learn and *remember*.
I value that far more than a minor quirk that is unlikely to
bother me now that I know of it. Well, at least not as long as
I do not forget my morning coffee/tea :)
But, having it copy lists -- when the only purpose of the multiplication is
for lists -- is only a minor quirk as well.
 
A

Andrew Robinson

There are many reasons not to put minimizing of typing ahead of all other
values:
I didn't. I put it ahead of *some* values for the sake of practicality
and human psychology.
" Practicality beats purity. "
* Typically, code is written once and read many times. Minimizing
typing might save you a second or two once, and then cost you many
seconds every time you read the code. That's why we tell people to
choose meaningful variable names, instead of naming everything "a"
and "b".
Yes. But this isn't going to cost any more time than figuring out
whether or not the list multiplication is going to cause quirks,
itself. Human psychology *tends* (it's a FAQ!) to automatically assume
the purpose of the list multiplication is to pre-allocate memory for the
equivalent (using lists) of a multi-dimensional array. Note the OP even
said "4d array".

The OP's original construction was simple, elegant, easy to read and
very commonly done by newbies learning the language because it's
*intuitive*. His second try was still intuitive, but less easy to read,
and not as elegant.
* Consistency of semantics is better than a plethora of special
cases. Python has a very simple and useful rule: objects should
not be copied unless explicitly requested to be copied. This is
much better than having to remember whether this operation or
that operation makes a copy. The answer is consistent:
Bull. Even in the last thread I noted the range() object produces
special cases.
range(0,5)[1]
1
range(0,5)[1:3]
range(1, 3)

The principle involved is that it gives you what you *usually* want; I
read some of the documentation on why Python 3 chose to implement it
this way.
(pardon me for belabouring the point here)

Q: Does [0]*10 make ten copies of the integer object?
A: No, list multiplication doesn't make copies of elements.
Neither would my idea for the vast majority of things on your first list.

Q: What about [[]]*10?
A: No, the elements are never copied.

YES! For the obvious reason that such a construction is making mutable
lists that the user wants to populate later. If they *didn't* want to
populate them later, they ought to have used tuples -- which take less
overhead. Who even does this thing you are suggesting?!
a = [[]]*10
a
[[], [], [], [], [], [], [], [], [], []]
a[0].append(1)
a
[[1], [1], [1], [1], [1], [1], [1], [1], [1], [1]]

Oops! Damn, not what anyone normal wants....
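[Editorial note: for completeness, the idiomatic fix -- already mentioned earlier in the thread -- is a comprehension, which evaluates `[]` once per iteration and so produces distinct lists:]

```python
# Each iteration creates a fresh empty list, so the rows are independent.
a = [[] for _ in range(10)]
a[0].append(1)
assert a == [[1], [], [], [], [], [], [], [], [], []]
```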
Q: How about if the elements are subclasses of list?
A: No, the elements are never copied.
Another poster brought that point up -- it's something I would have to
study before answering.
It's a valid objection.
Q: What about other mutable objects like sets or dicts?
A: No, the elements are never copied.
They aren't list multiplication compatible in any event! It's a total
nonsense objection.

If these are inconsistent in my idea -- OBVIOUSLY -- they are
inconsistent in Python's present implementation. You can't even
reference duplicate them NOW.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for *: 'dict' and 'int'
Q: How about on Tuesdays? I bet they're copied on Tuesdays.
A: No, the elements are never copied.
That's really a stupid objection, and everyone knows it.
" Although that way may not be obvious at first unless you're Dutch. "
Your proposal throws away consistency for a trivial benefit on a rare use-
case, and replaces it with a bunch of special cases:
RARE!!!! You are NUTS!!!!
Q: What about [[]]*10?
A: Oh yeah, I forgot about lists, they're copied. Yup.

Q: How about if the elements are subclasses of list?
A: Hmmm, that's a good one, I'm not actually sure.

Q: How about if I use delegation to proxy a list?
A: Oh no, they definitely won't be copied.
Give an example usage of why someone would want to do this. Then we can
discuss it.
Q: What about other mutable objects like sets or dicts?
A: No, definitely not. Unless people complain enough.
now you're just repeating yourself to make your contrived list longer --
but there's no new objections...
Losing consistency in favour of saving a few characters for something as
uncommon as list multiplication is a poor tradeoff. That's why this
proposal has been rejected again and again and again every time it has
been suggested.
Please link to the objection being proposed to the developers, and their
reasoning for rejecting it.
I think you are exaggerating.
List multiplication [x]*n is conceptually equivalent to:
<snip>
This is nice and simple and efficient.
No it isn't efficient. It's *slow* when done as in your example.
Copying other objects is slow and inefficient. Keeping list
multiplication consistent, and fast, is MUCH more important than making
it work as expected for the rare case of 2D arrays:
I don't think so -- again, look at range(); it was made to work
inconsistently for a "common" case.

Besides, 2D arrays are *not* rare and people *have* to copy internals of
them very often.
The copy speed will be the same or *faster*, and the typing less -- and
the psychological mistakes *less*, the elegance more.

It's hardly going to confuse anyone to say that lists are copied with
list multiplication, but the elements are not.

Every time someone passes a list to a function, they *know* that the
list is passed by value -- and the elements are passed by reference.
People in Python are USED to lists being "the" way to weird behavior
that other languages don't do.
Copying those elements does not come for free.

It is true that list multiplication can be much faster than a list comp.
But that's because the list multiply doesn't have to inspect the
elements, copy them, or engage the iteration machinery. Avoiding copying
gives you a big saving:


[steve@ando ~]$ python3.3 -m timeit -s "x = range(1000)"
"[x for _ in range(100)]" # not copied
100000 loops, best of 3: 11.9 usec per loop

[steve@ando utilities]$ python3.3 -m timeit -s "x = range(1000)"
"[x[:] for _ in range(100)]" # copied
10000 loops, best of 3: 103 usec per loop

So there's a factor of ten difference right there. If list multiplication
had to make copies, it would lose much of its speed advantage.
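[Editorial note: the semantic difference behind those timings can be checked without a stopwatch. Only the slicing form produces independent copies, which is what the extra ~90 microseconds pays for (sketch):]

```python
x = list(range(1000))

no_copy = [x for _ in range(100)]     # 100 references to the same list
copied  = [x[:] for _ in range(100)]  # 100 distinct shallow copies

assert all(item is x for item in no_copy)
assert all(item is not x and item == x for item in copied)
```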
And when multiplication doesn't make copies of *lists*, it's going
"nowhere fast", because people don't want the results that gives.

So what difference does it make? People won't make the construction
unless they wanted to make the copies in the first place. If they want
the copies, well -- copies are *slow*. Big deal.
For large
enough lists, or complicated enough objects, it would become slower than
a list comprehension.
Huh? You're nuts.
It would be even slower if list multiplication had to inspect each
element first and decide whether or not to copy.
A single pointer comparison in a 'C' for loop takes less than 5
nanoseconds on a 1 GHz machine.
(I'll bet yours is faster than that...!)
Consider: list objects have a pointer which points back to the generic
list object -- that's all it takes to determine what the "type" is.

Your measured loop times, doing list comprehensions takes over 10
microseconds *per loop*.
Compared to what you're proposing -- The pointer compare is a mere 0.05%
change; You can't even measure that with "timeit!". BUT: The increase
in speed for not running tokenized "for" loops is *much* bigger than the
loss for a single pointer compare; so it will *usually* be a *serious*
net gain.
Anyone who is currently using list multiplication with mutable objects is
expecting that they will be the same object, and relying on that fact.
Otherwise they wouldn't be using list multiplication.
yes, and I'm not changing that -- except for lists; and *no* one is
using that.
Find two examples of it from existing non contrived web examples of
Python code.
*ask* around.
You're suggesting a semantic change. Therefore they will be expecting
something different from what actually happens. Result: broken code.
Even if it was; So are many semantic changes happening between python 2
and python 3.
Look at what python 2 did:
range(0,5)[0]
0
range(0,5)[1:3]
[1, 2]

That's a *semantic* change.
Also; if you complain that xrange has been renamed range; then look:
xrange(0,5)[0]
0
xrange(0,5)[1:3]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: sequence index must be integer, not 'slice'

WOW. WOW. WOW. An even BIGGER semantic change.

It's not just mutable objects. It's also objects that can't be copied.
Result: mylist*3 used to work, now it raises an exception. And
performance issues: what used to be fast is now slow.
Where do you get off?? A list can be copied -- the contents might not be.
Even if this change was allowed, it would have to go through a multi-year
process.
Fine. if that's normal -- then let them process it the normal way.
That's not my concern in the slightest.
to get the behaviour you want, and then in Python 3.5 it would become the
default. That's three years until it becomes the standard. Meanwhile,
there will still be millions of people using Python 2.7 or 3.2, and their
code will behave differently from your code.
Uh, they aren't *using* the construction I am proposing now -- they are
avoiding it like the plague.
Hence, it will merely become a new ability in a few years -- not
'differently' behaving code.

The rest of your repetitive nonsense has been deleted.
:(
 
I

Ian Kelly

I meant all lists are shallow copied from the innermost level out.
Equivalently, it's a deep copy of list objects -- but a shallow copy of any list contents except other lists.

Why only list objects, though? When a user writes [[]] * 10, they
probably want a list containing ten distinct nested lists. Likewise,
when a user writes [{}] * 10, they probably want a list containing ten
distinct dicts, which is not at all an uncommon thing to want. It
seems very inconsistent that the former should work while the latter
should not. This is especially true when you start mixing the two
paradigms; the user might expect [[{}] * 10] * 10 to create a 10x10
matrix where each element is a distinct dict, but this still would not
work, even though the nested lists would all have different
identities.

What about ([],) * 10? This is perhaps best interpreted as a request
to create a matrix of ten rows where the rows themselves are mutable
but the collection of rows is not. If list multiplication were to
copy nested lists, then should tuple multiplication do the same?
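[Editorial note: as things stand, tuple multiplication behaves exactly like list multiplication -- the outer tuple is immutable, but every slot holds a reference to the same inner list, as a quick check shows:]

```python
# An immutable tuple of ten references to ONE mutable list.
t = ([],) * 10
t[0].append('x')

# The append is visible through every slot, because they are all
# the same object.
assert all(row == ['x'] for row in t)
assert all(row is t[0] for row in t)
```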
 
P

Prasad, Ramit

Andrew said:
On Mon, 05 Nov 2012 21:51:24 -0800, Andrew Robinson wrote:

[snip]
Q: What about other mutable objects like sets or dicts?
A: No, the elements are never copied.
They aren't list multiplication compatible in any event! It's a total
nonsense objection.

If these are inconsistent in my idea -- OBVIOUSLY -- they are
inconsistent in Python's present implementation. You can't even
reference duplicate them NOW.

Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for *: 'dict' and 'int'

z = [ {'a':1} ]*10
z[0]['b'] = 4
z
[{'a': 1, 'b': 4}, {'a': 1, 'b': 4}, {'a': 1, 'b': 4},{'a': 1, 'b': 4},
{'a': 1, 'b': 4}, {'a': 1, 'b': 4}, {'a': 1, 'b': 4}, {'a': 1, 'b': 4},
{'a': 1, 'b': 4},{'a': 1, 'b': 4}]

Should that copy the dictionary? According to logical reasoning
it should copy the dictionary as well. How do you draw the line of
what should be copied and what should not?

Q: How about on Tuesdays? I bet they're copied on Tuesdays.
A: No, the elements are never copied.
That's really a stupid objection, and everyone knows it.

Agreed. [snip]

Q: How about if I use delegation to proxy a list?
A: Oh no, they definitely won't be copied.
Give an example usage of why someone would want to do this. Then we can
discuss it.

IIRC, someone wanted to do something very similar for dictionaries to
prevent editing of global variables.

now you're just repeating yourself to make your contrived list longer --
but there's no new objections...

This is my main objection and one of the flaws of your argument.
You want to handle one type of mutable objects completely separately
than other mutable objects. Why is list any different than dictionary
in this respect? The only reason I can imagine is because lists
end up being used for 2d (or higher) "matrices".

Please link to the objection being proposed to the developers, and their
reasoning for rejecting it.
I think you are exaggerating.

I reject (as a developer) it because it forces me to remember a very
specific quirk versus a simple (logical) rule that applies to all objects. Not to mention that the quirk is not even that useful except for beginners.

List multiplication [x]*n is conceptually equivalent to:
<snip>
This is nice and simple and efficient.
No it isn't efficient. It's *slow* when done as in your example.

Copying other objects is slow and inefficient. Keeping list
multiplication consistent, and fast, is MUCH more important than making
it work as expected for the rare case of 2D arrays:
I don't think so -- again, look at range(); it was made to work
inconsistently for a "common" case.

Besides, 2D arrays are *not* rare and people *have* to copy internals of
them very often.
The copy speed will be the same or *faster*, and the typing less -- and
the psychological mistakes *less*, the elegance more.

It's hardly going to confuse anyone to say that lists are copied with
list multiplication, but the elements are not.

Every time someone passes a list to a function, they *know* that the
list is passed by value -- and the elements are passed by reference.
People in Python are USED to lists being "the" way to weird behavior
that other languages don't do.

I think you just lost 90% of your credibility (with me). When did lists
get passed by value? Python uses call by sharing[0].

Terminology aside, lists are handled exactly the same way as all
other objects; the rules regarding their mutability in the callee
are the same as dictionaries, sets, or any mutable type (including
non-builtins).


Copying those elements does not come for free.

It is true that list multiplication can be much faster than a list comp.
But that's because the list multiply doesn't have to inspect the
elements, copy them, or engage the iteration machinery. Avoiding copying
gives you a big saving:


[steve@ando ~]$ python3.3 -m timeit -s "x = range(1000)"
"[x for _ in range(100)]" # not copied
100000 loops, best of 3: 11.9 usec per loop

[steve@ando utilities]$ python3.3 -m timeit -s "x = range(1000)"
"[x[:] for _ in range(100)]" # copied
10000 loops, best of 3: 103 usec per loop

So there's a factor of ten difference right there. If list multiplication
had to make copies, it would lose much of its speed advantage.
And when multiplication doesn't make copies of *lists*, it's going
"nowhere fast", because people don't want the results that gives.

So what difference does it make? People won't make the construction
unless they wanted to make the copies in the first place. If they want
the copies, well -- copies are *slow*. Big deal.

For large
enough lists, or complicated enough objects, it would become slower than
a list comprehension.
Huh? You're nuts.

It would be even slower if list multiplication had to inspect each
element first and decide whether or not to copy.
A single pointer comparison in a 'C' for loop takes less than 5
nanoseconds on a 1 GHz machine.
(I'll bet yours is faster than that...!)
Consider: list objects have a pointer which points back to the generic
list object -- that's all it takes to determine what the "type" is.

Your measured loop times, doing list comprehensions takes over 10
microseconds *per loop*.
Compared to what you're proposing -- The pointer compare is a mere 0.05%
change; You can't even measure that with "timeit!". BUT: The increase
in speed for not running tokenized "for" loops is *much* bigger than the
loss for a single pointer compare; so it will *usually* be a *serious*
net gain.

Anyone who is currently using list multiplication with mutable objects is
expecting that they will be the same object, and relying on that fact.
Otherwise they wouldn't be using list multiplication.
yes, and I'm not changing that -- except for lists; and *no* one is
using that.
Find two examples of it from existing non contrived web examples of
Python code.
*ask* around.

I am positive that majority of code is not examples--web or otherwise.

You're suggesting a semantic change. Therefore they will be expecting
something different from what actually happens. Result: broken code.
Even if it was; So are many semantic changes happening between python 2
and python 3.
Look at what python 2 did:

range(0,5)[0]
0
range(0,5)[1:3]
[1, 2]

That's a *semantic* change.
Also; if you complain that xrange has been renamed range; then look:

xrange(0,5)[0]
0
xrange(0,5)[1:3]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: sequence index must be integer, not 'slice'

WOW. WOW. WOW. An even BIGGER semantic change.

So because one thing has a semantic change that gives license
for semantic changes everywhere? Bah, ridiculous!

[snip]
Uh, they aren't *using* the construction I am proposing now -- they are
avoiding it like the plague.
Hence, it will merely become a new ability in a few years -- not
'differently' behaving code.

How in the name of <insert deity (or religion)> do you have any clue
about that? Granted, as an educated guess you *may* be right, but you
may not be. I have no idea how you could know this definitively
or with any great degree of certainty. [snip]


~Ramit

[0] http://en.wikipedia.org/wiki/Evaluation_strategy#Call_by_sharing


 
I

Ian Kelly

They aren't list multiplication compatible in any event! It's a total
nonsense objection.

If these are inconsistent in my idea -- OBVIOUSLY -- they are inconsistent
in Python's present implementation. You can't even reference duplicate them
NOW.


Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for *: 'dict' and 'int'

The objection is not nonsense; you've merely misconstrued it. If
[[1,2,3]] * 4 is expected to create a mutable matrix of 1s, 2s, and
3s, then one would expect [[{}]] * 4 to create a mutable matrix of
dicts. If the dicts are not copied, then this fails for the same
reason.
Give an example usage of why someone would want to do this. Then we can
discuss it.

Seriously? Read a book on design patterns. You might start at SO:

http://stackoverflow.com/questions/832536/when-to-use-delegation-instead-of-inheritance
Please link to the objection being proposed to the developers, and their
reasoning for rejecting it.
I think you are exaggerating.
From Google:

http://bugs.python.org/issue1408
http://bugs.python.org/issue12597
http://bugs.python.org/issue9108
http://bugs.python.org/issue7823

Note that in two out of these four cases, the reporter was trying to
multiply lists of dicts, not just lists of lists.
Besides, 2D arrays are *not* rare and people *have* to copy internals of
them very often.
The copy speed will be the same or *faster*, and the typing less -- and the
psychological mistakes *less*, the elegance more.

List multiplication is not potentially useful for copying 2D lists,
only for initializing them. For copying an existing nested list,
you're still stuck with either copy.deepcopy() or a list
comprehension.
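[Editorial note: the two workable options Ian mentions for copying an existing nested list, side by side -- both leave the original untouched:]

```python
import copy

original = [[1, 2], [3, 4]]

deep = copy.deepcopy(original)               # copies every level
shallow_rows = [row[:] for row in original]  # copies one level of rows

deep[0][0] = 99
shallow_rows[1][0] = 77
assert original == [[1, 2], [3, 4]]  # unaffected by either mutation
```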
It's hardly going to confuse anyone to say that lists are copied with list
multiplication, but the elements are not.

Every time someone passes a list to a function, they *know* that the list is
passed by value -- and the elements are passed by reference. People in
Python are USED to lists being "the" way to weird behavior that other
languages don't do.

Incorrect. Python uses what is commonly known as call-by-object, not
call-by-value or call-by-reference. Passing the list by value would
imply that the list is copied, and that appends or removes to the list
inside the function would not affect the original list. This is not
what Python does; the list inside the function and the list passed in
are the same list. At the same time, the function does not have
access to the original reference to the list and cannot reassign it by
reassigning its own reference, so it is not call-by-reference
semantics either.
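[Editorial note: Ian's description of call-by-object can be demonstrated in a few lines -- mutation through the shared object is visible to the caller, rebinding the local name is not:]

```python
def mutate_and_rebind(lst):
    lst.append(99)      # mutates the caller's list: same object
    lst = ['replaced']  # rebinds only the local name; caller unaffected

data = [1, 2, 3]
mutate_and_rebind(data)
assert data == [1, 2, 3, 99]  # mutation visible, rebinding not
```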
 
M

MRAB

They aren't list multiplication compatible in any event! It's a total
nonsense objection.

If these are inconsistent in my idea -- OBVIOUSLY -- they are inconsistent
in Python's present implementation. You can't even reference duplicate them
NOW.


Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for *: 'dict' and 'int'

The objection is not nonsense; you've merely misconstrued it. If
[[1,2,3]] * 4 is expected to create a mutable matrix of 1s, 2s, and
3s, then one would expect [[{}]] * 4 to create a mutable matrix of
dicts. If the dicts are not copied, then this fails for the same
reason.
Give an example usage of why someone would want to do this. Then we can
discuss it.

Seriously? Read a book on design patterns. You might start at SO:

http://stackoverflow.com/questions/832536/when-to-use-delegation-instead-of-inheritance
Please link to the objection being proposed to the developers, and their
reasoning for rejecting it.
I think you are exaggerating.
From Google:

http://bugs.python.org/issue1408
http://bugs.python.org/issue12597
http://bugs.python.org/issue9108
http://bugs.python.org/issue7823

Note that in two out of these four cases, the reporter was trying to
multiply lists of dicts, not just lists of lists.
Besides, 2D arrays are *not* rare and people *have* to copy internals of
them very often.
The copy speed will be the same or *faster*, and the typing less -- and the
psychological mistakes *less*, the elegance more.

List multiplication is not potentially useful for copying 2D lists,
only for initializing them. For copying an existing nested list,
you're still stuck with either copy.deepcopy() or a list
comprehension.
It's hardly going to confuse anyone to say that lists are copied with list
multiplication, but the elements are not.

Every time someone passes a list to a function, they *know* that the list is
passed by value -- and the elements are passed by reference. People in
Python are USED to lists being "the" way to weird behavior that other
languages don't do.

Incorrect. Python uses what is commonly known as call-by-object, not
call-by-value or call-by-reference. Passing the list by value would
imply that the list is copied, and that appends or removes to the list
inside the function would not affect the original list. This is not
what Python does; the list inside the function and the list passed in
are the same list. At the same time, the function does not have
access to the original reference to the list and cannot reassign it by
reassigning its own reference, so it is not call-by-reference
semantics either.
I prefer the term "reference semantics".
 
