Sam said:
My understanding of Python tuples is that they are like immutable lists. If this is the case, why can't we replace tuples with lists all the time (just don't reassign the lists)? Correct me if I am wrong.
One big difference is that tuples are hashable, so they can be used as dictionary keys; lists cannot, because a list's contents (and therefore its value) can change after it has been inserted:

>>> mapping = {}
>>> key = (1,2)
>>> mapping[key] = "Hello"
>>> key = (1,3)
>>> mapping[key] = "World"
>>> key = (2,3)
>>> mapping[key] = "!"
>>> mapping
{(1, 2): 'Hello', (1, 3): 'World', (2, 3): '!'}
>>> key = [1,2]
>>> mapping[key]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'list'
>>> mapping[1,3]
'World'
>>> lst1 = [1,2]
>>> lst2 = [1,3]
>>> lst1 == lst2
False
>>> lst1[1] += 1
>>> lst1 == lst2
True
>>> lst1[1] += 1
>>> lst1 == lst2
False

If a list could be a key, mutating it in place (as above) would change which keys it compares equal to, and the dictionary's lookups would silently break.
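To make the mechanism concrete, here is a minimal sketch (a plain script, not from the original thread) showing hash() behaviour directly; the error messages are CPython's:

# Dict keys must be hashable, and hashability depends on the
# immutability of the contents.
t = (1, 2)
print(hash(t))          # fine: tuples of hashable items are hashable

try:
    hash([1, 2])        # lists define no hash because they can mutate
except TypeError as e:
    print(e)            # unhashable type: 'list'

try:
    hash((1, [2]))      # a tuple is only hashable if its items are
except TypeError as e:
    print(e)            # the inner list makes the whole tuple unhashable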
Sam said:
My understanding of Python tuples is that they are like immutable
lists. If this is the case, why can't we replace tuples with lists
all the time (just don't reassign the lists)?
Grant Edwards said:
In contrast, tuples are often used as fixed-length heterogeneous
containers (more like a struct in C except the fields are named 0, 1,
2, 3, etc.). In a particular context, the Nth element of a tuple will
always mean one thing (e.g. a person's last name) while the Mth
element will always be something else (e.g. a person's age).
And, of course, namedtuples make that much more explicit.
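For instance (a hypothetical two-field record, just to illustrate the point):

from collections import namedtuple

# With a plain tuple, the fields are identified only by position:
person = ("Smith", 42)               # 0: last name, 1: age
print(person[0], person[1])

# A namedtuple gives those positions explicit names:
Person = namedtuple("Person", ["last_name", "age"])
p = Person(last_name="Smith", age=42)
print(p.last_name, p.age)            # self-documenting access
print(p[0] == p.last_name)           # True -- it is still a tuple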
It also appears that tuples are more memory efficient. I just ran some
quick tests on my OS X box. Creating a list of 10 million [1, 2, 3, 4,
5] lists gave me a 1445 MB process. The same number of (1, 2, 3, 4, 5)
tuples was 748 MB. I'm sure this is implementation dependent, but it
seems plausible to assume similar results will be had on other
implementations.
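A quick way to see the per-object difference (a sketch, not the exact test above; note that sys.getsizeof is shallow, counting only the container itself, which is fine here since both hold the same five ints):

import sys

# Numbers vary by CPython version and platform, but the tuple is
# consistently smaller: a list also carries a pointer to a separately
# allocated item array plus growth bookkeeping.
print(sys.getsizeof([1, 2, 3, 4, 5]))
print(sys.getsizeof((1, 2, 3, 4, 5)))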
On 23/02/2014 17:48, Roy Smith wrote:
It also appears that tuples are more memory efficient. I just ran some
quick tests on my OS X box. Creating a list of 10 million [1, 2, 3, 4,
5] lists gave me a 1445 MB process. The same number of (1, 2, 3, 4, 5)
tuples was 748 MB. I'm sure this is implementation dependent, but it
seems plausible to assume similar results will be had on other
implementations.
In CPython a list is overallocated, so there are usually spare slots
available if you want to add something to it. In contrast, you know when
you create a tuple just how big it is, so no overallocation is needed.
Beyond that, you get into religious territory.
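You can watch the overallocation happen from Python (a CPython-specific sketch; the exact step sizes are an implementation detail):

import sys

# append() grows the list in steps, reserving spare slots each time,
# so getsizeof() stays flat between the jumps.
lst = []
last = sys.getsizeof(lst)
print(f"len=0  {last} bytes")
for i in range(20):
    lst.append(i)
    size = sys.getsizeof(lst)
    if size != last:
        print(f"len={len(lst):<2} {size} bytes")
        last = size

# A tuple's length is fixed at creation, so it is sized exactly:
print(sys.getsizeof(tuple(range(20))))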