pygame - importing GL - very bad...


someone

On 13-01-02 08:53 PM, someone wrote:
I'd agree on it being rather impractical/pointless/verbose to have every
single OpenGL entry point and constant have an extra gl. or glu. or
glut. added to the front. OpenGL/GLU/GLUT is already namespaced, but
using C-style prefix namespacing (that is gl*, glu*, glut* and GL_*,
GLU_*, GLUT_*), so adding Python-style namespacing to the front of that
makes it very verbose. OpenGL-using code is *littered* with OpenGL
entry points and constants (and yes, I intend the slight), so that's
going to make it rather annoying to work with.

Good to hear, I'm not alone, thanks.

PyOpenGL's current approach is mostly attempting to maintain backward
compatibility with the older revisions. wxPython actually rewrote its
whole interface to go from * imports into namespaced lookups and then
wrote a little migration tool that would attempt to rewrite your code
for the new version. They also provided a transitional API so that code
could mix-and-match the styles. For PyOpenGL that would look something
like this:

from OpenGL import gl, glu, glut

gl.Rotate(...)
gl.Clear(gl.COLOR_BUFFER_BIT)

Ok, that's interesting. However, I like it the way it is, where I can
copy/paste C code from the web, change some small things, and it'll
work and fit my needs. BUT I didn't know there was such a transitional
API - interesting. However, I don't want to be a first mover - let's see
if sufficiently many people use this and then I'll consider doing it
too. At the moment, I still think the star import is good because it
makes it easy to look up C code and program it the way people would do
it in C.
or, if you really needed PEP-8 compliance, and don't mind making the API
look nothing like the original, we might even go to:

from opengl import gl, glu, glut

gl.rotate(...)
gl.clear(gl.COLOR_BUFFER_BIT)

Erhm, that's the same as above. Is that what you meant to write?
Either of which would *also* make it possible for us to lazy-load the
entry points and symbols (that would save quite a bit of ram).

But I'm not actually likely to do this, as it makes it far more annoying
to work with C-oriented references (and since PyOpenGL is primarily used
by new OpenGL coders who need to lean heavily on references, that's a
big deal). Currently you can often copy-and-paste C code into PyOpenGL
and have it work properly as far as the OpenGL part is concerned (arrays
and the like need to be rewritten, but that's not something I can
control, really). People are already confused by the small variations
from C OpenGL; making the API look entirely different wouldn't be a good
direction to move, IMO.

Agree - that's something I like.

Well, I'm sometimes a bit annoyed that python doesn't give as many
warnings/errors as one gets in C - for instance sometimes I copy/paste
and forget to remove the trailing semi-colons. However, after I began to
use pylint and friends, this kind of error gets caught. Then sometimes I
forget to add () for function calls, which in C would cause a compiler
error but which python allows, so one gets the function object instead
(which maybe exists for a good reason, but at least in the beginning
when I learned python, this was very annoying and confusing to me).
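
To illustrate the missing-parentheses case, here's a minimal sketch (the
time module is just an arbitrary example): a bare function name is a
perfectly legal expression in python, so forgetting the () silently gives
you the function object instead of its result:

import time

t = time.time    # oops, forgot the () - t is now the function object itself
print(t)         # prints something like <built-in function time>

t = time.time()  # with the parentheses we get the current time as a float
print(t)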

Thanks for the comments about this transitional opengl stuff - I was
unaware of that. Currently it's not so interesting to me; I like the way
I do it now, so I can quickly grab C/C++ code pieces from the web and
change them to suit my needs...
 

someone

Doesn't this "[ ... ]" mean something optional?

What does {2,30}$ mean?

I think $ means that the {2,30} is something in the end of the sentence...

You can find regular expression primers all over the internet, but to
answer these specific questions: [...] means any of the characters in
the range; the $ means "end of string"; and {2,30} means at least two,
and at most thirty, of the preceding character. So you have to have
one from the first group, then 2-30 from the second, for a total of
3-31 characters in your names.
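
If you want to check that interactively, a quick sketch along these lines
should work (the example names are just made up):

import re

# the pattern quoted from the pylint message above
name_re = re.compile(r"[a-z_][a-z0-9_]{2,30}$")

print(bool(name_re.match("rx")))       # False: only 1 + 1 characters
print(bool(name_re.match("r_x")))      # True:  1 + 2 characters
print(bool(name_re.match("my_name")))  # True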

Got it, thanks!
 

someone

someone said:
Terry Reedy wrote:

[a-z_][a-z0-9_]{2,30}$ - so I suppose it wants this name to end with an
underscore?

No, it allows underscores. As I read that re, 'rx', etc, do match. They

No, it's one leading letter or underscore [a-z_] plus at least two (and
at most thirty) letters, digits or underscores [a-z0-9_]{2,30}

Ah, [a-z0-9_]{2,30} means there should be at least two characters and
at most 30 characters here?

Yes. See

http://docs.python.org/2/library/re.html#regular-expression-syntax

Thanks - it's on my TODO-list to learn more about how to use these
regexps...
 

someone

I quite agree. Requiring at least 3 characters for attribute names is
not even PEP-8 style but pylint-author style. I was really surprised at
that. In that case, the message should say 'Does not match pylint's
recommended style.' or even 'Does not match the configured style.' I
have not used pylint or pychecker as of yet.

I agree with you all...

Thanks, everyone - now I shall investigate pylint and friends in more
detail on my own :)
 

someone

The first lint program I recall hearing of was available in the early
1980's, and was for the C language. At the time, the C language was
extremely flexible (in other words, lots of ways to shoot yourself in
the foot) and the compiler was mostly of the philosophy - if there's a
way to make sense of the statement, generate some code, somehow.

Anyway, lint made sense to me as the crud that gets mixed in with the
real fabric. And a linter is a machine that identifies and removes that
crud. Well, the lint program didn't remove anything, but it identified
a lot of it. I didn't hear the term linter till decades later.

Aah, now I understand this "linting" and where it came from - thanks a
lot! :)
 

Dave Angel

Ok, that's interesting. However, I like it the way it is, where I can
copy/paste C code from the web, change some small things, and it'll
work and fit my needs. BUT I didn't know there was such a transitional
API - interesting. However, I don't want to be a first mover - let's see
if sufficiently many people use this and then I'll consider doing it
too. At the moment, I still think the star import is good because it
makes it easy to look up C code and program it the way people would do
it in C.


Erhm, that's the same as above. Is that what you meant to write?

No, it's not the same; here he did not capitalize the function names.
Previously they looked like class instantiations.
<snip>

Well, I'm sometimes a bit annoyed that python doesn't give as many
warnings/errors as one gets in C - for instance sometimes I copy/paste
and forget to remove the trailing semi-colons.

Trailing semicolons are legal in most cases. The semicolon is a
separator between statements, used when one wants to put multiple
statements on one line.
However, after I began to use pylint and friends, this kind of error
gets caught. Then sometimes I forget to add () for function calls, which
in C would cause a compiler error

Actually no. In C, a function name without parentheses is also a
function pointer. Not exactly the same as a function object, though C++
gets a lot closer. But the real reason C catches that typo is that the
types most likely don't match, depending on what you meant to do with
the return value.
but which python allows, so one gets the function object instead (which
maybe exists for a good reason, but at least in the beginning when I
learned python, this was very annoying and confusing to me).

Function objects are enormously useful, as you get more adept at using
Python.
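
For instance, here's a small sketch of the kind of thing function objects
enable (the handler names are invented purely for illustration):

def start():
    print("starting")

def stop():
    print("stopping")

# no parentheses: we store the function objects themselves
handlers = {"start": start, "stop": stop}

command = "start"
handlers[command]()   # the parentheses go back on when we actually call one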
 

someone

No, it's not the same; here he did not capitalize the function names.
Previously they looked like class instantiations.

Ah, you're right. Sorry :)
Trailing semicolons are legal in most cases. The semicolon is a
separator between statements, used when one wants to put multiple
statements on one line.


Actually no. In C, a function name without parentheses is also a
function pointer. Not exactly the same as a function object, though C++
gets a lot closer. But the real reason C catches that typo is that the
types most likely don't match, depending on what you meant to do with
the return value.

Ok, I think you're right. At least I find that C compilers catch many
errors/warnings which python doesn't say anything about. But C also
requires me to define/declare the types of variables before I use
them... OTOH I guess I like that python is faster to code in, compared
to C...
Function objects are enormously useful, as you get more adept at using
Python.

Ok, I'll look forward to that. Recently I had some problems with
pass-by-value vs pass-by-reference. I googled the problem and found that
by default python passes by reference. I then debugged my program and
finally found out that I should make a copy (a new object) of the
variable, before I passed it to my function. And THIS solved my problem.
But in C/C++ I think the default is to pass by value, so this error in
my program was a bit unexpected... Anyway, I feel python is a great
language for doing things much faster than I could possibly do in C/C++.

I also have it on my todo-list to take my opengl program and make it
into an executable. I mainly sit on a linux PC, but I want to distribute
my opengl program for windows (which has the most users). I've found
py2app, cx_Freeze, bbfreeze and Freeze on google, and I hope this
cross-platform thing doesn't cause too many problems... I tried one of
these tools a few weeks ago, but I think I only succeeded with a very
simple hello-world program... Anyway, that's a problem I'll investigate
in a few months, when I think/hope my opengl program is finished...
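
For what it's worth, a cx_Freeze setup script can be quite small. The
following is only a rough sketch, assuming the entry point is called
main.py and the program uses PyOpenGL and pygame - the names would need
adjusting to the real project:

# setup.py - rough cx_Freeze sketch; file and package names are assumptions
from cx_Freeze import setup, Executable

setup(
    name="my_opengl_app",
    version="0.1",
    description="small PyOpenGL/pygame program",
    options={"build_exe": {"packages": ["OpenGL", "pygame"]}},
    executables=[Executable("main.py", base="Win32GUI")],  # GUI app on Windows
)

It would then be built with "python setup.py build" on the platform being
targeted.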
 

Dave Angel

On 01/05/2013 02:30 AM, Dave Angel wrote:


Ok, I'll look forward to that. Recently I had some problems with
pass-by-value vs pass-by-reference. I googled the problem and found
that by default python passes by reference.

Pascal has two calling conventions (value and reference). C always
calls by value. C++ adds a call by reference, and maybe later C
standards have added it as well.

Python always calls by object. In fact, one could argue that there are
no values (in the C sense) in Python. All names, and all attributes,
and all slots in collections, are references to objects. So when you
call a function, what you're doing is telling the function how to
reference the same object(s). The usual way to say that is that the
function call binds a new name to an existing object.

If that object is mutable, then perhaps you wanted to do a copy first.
Or perhaps you didn't. As you say, you debugged the program to find out.
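
A minimal sketch of that idea (the variable names here are made up): the
parameter inside the function is just another reference to the very same
object the caller passed in, so mutating the object is visible to the
caller, while rebinding the name is not:

def modify(seq):
    print(id(seq))   # same id as the caller's list - it is the same object
    seq.append(4)    # mutation: visible to the caller
    seq = [99]       # rebinding: only changes the local name

data = [1, 2, 3]
print(id(data))
modify(data)
print(data)          # [1, 2, 3, 4]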
 

Chris Angelico

Ok, I think you're right. At least I find that C compilers catch many
errors/warnings which python doesn't say anything about. But C also
requires me to define/declare the types of variables before I use
them... OTOH I guess I like that python is faster to code in, compared
to C...

C has typed variables, so it's a compile-time error to try to put any
other type into that variable. Python doesn't. That flexibility comes
at the cost of error-catching. There are hybrid systems, but in
general, type declarations imply variable declarations, and that's
something that Python doesn't want. (I'm of the opinion that
declarations aren't such a bad thing; they make some aspects of
scoping easier. However, that's a design decision that Python is as
unlikely to reverse as indentation-defined blocks.)
Ok, I'll look forward to that. Recently I had some problems with
pass-by-value vs pass-by-reference. I googled the problem and found that by
default python passes by reference.

No, it doesn't. You can find good references on the subject in various
places, but call-by-reference as implemented in Pascal simply doesn't
exist in most modern languages, because its semantics are way
confusing. The name given to this technique varies; here are a couple
of links:

http://effbot.org/zone/call-by-object.htm
http://en.wikipedia.org/wiki/Evaluation_strategy#Call_by_sharing

ChrisA
 

Jan Riechers

Doesn't this "[ ... ]" mean something optional?

What does {2,30}$ mean?

I think $ means that the {2,30} is something in the end of the
sentence...

You can find regular expression primers all over the internet, but to
answer these specific questions: [...] means any of the characters in
the range; the $ means "end of string"; and {2,30} means at least two,
and at most thirty, of the preceding character. So you have to have
one from the first group, then 2-30 from the second, for a total of
3-31 characters in your names.

Got it, thanks!

Just following the discussion, which is very interesting, even though
I'm not working with OpenGL (which looks like a nightmare with that
namespacing) :)

But about the regular expressions (a slightly deeper look into that),
as Chris said:

[a-z]
defines a character class, in this case all ASCII lowercase letters
ranging from "a" to "z". If nothing else is provided, the rule matches
exactly one such letter, unless a repetition count is given with
something like:
{m,n} -> repeat the preceding match between m and n times

There are also:
? -> match zero or one of the preceding item (and, placed after another
quantifier, it makes that quantifier lazy/non-greedy)
* -> greedy match: zero or more of the preceding item

[A-Z][0-9]{1,3} means, translated:
Look for any uppercase letter in ASCII(!) (not "öäü" or similar)
ranging from "A" to "Z".

Now look for digits (the second character class), with the addition that
the rule is satisfied ONLY if at least 1 and at most 3 digits are found.
Note: if there are 4 or more digits, the rule is still satisfied and
will provide a match/hit (unless the pattern is anchored, e.g. with $).

If there is a follow-up part, the next piece of the pattern is evaluated
if present, and so forth. If the whole expression is satisfied, the
match is returned.

The lazy and greedy quantifiers try, as the naming implies, to match as
little or as much as possible of what you have asked for.

For example, this regular expression is valid:
0* -> look for zeros, and be greedy about how many consecutive zeros you
match (possibly none at all).

Regular expressions don't have to include capturing groups in order to work.
But once you use them yourself a bit, it's quite simple, I think.
I guess you are anyhow busy wrangling with pylint, PEP standards and
PyOpenGL - so good luck with that :)

Jan Riechers
 

someone

C has typed variables, so it's a compile-time error to try to put any
other type into that variable. Python doesn't. That flexibility comes
at the cost of error-catching. There are hybrid systems, but in
general, type declarations imply variable declarations, and that's
something that Python doesn't want. (I'm of the opinion that
declarations aren't such a bad thing; they make some aspects of
scoping easier. However, that's a design decision that Python is as
unlikely to reverse as indentation-defined blocks.)
Understood.


No, it doesn't. You can find good references on the subject in various
places, but call-by-reference as implemented in Pascal simply doesn't
exist in most modern languages, because its semantics are way
confusing. The name given to this technique varies; here are a couple
of links:

http://effbot.org/zone/call-by-object.htm
http://en.wikipedia.org/wiki/Evaluation_strategy#Call_by_sharing

If you don't like calling it "pass-by-reference", perhaps you prefer
calling it "call by object reference"... From:
http://effbot.org/zone/call-by-object.htm

In any case I think we understand each other.
 

someone

On 05.01.2013 03:11, someone wrote:
But about the regular expressions (a slightly deeper look into that),
as Chris said:

[a-z]
defines a character class, in this case all ASCII lowercase letters
ranging from "a" to "z". If nothing else is provided, the rule matches
exactly one such letter, unless a repetition count is given with
something like:
{m,n} -> repeat the preceding match between m and n times

There are also:
? -> match zero or one of the preceding item (and, placed after another
quantifier, it makes that quantifier lazy/non-greedy)
* -> greedy match: zero or more of the preceding item

[A-Z][0-9]{1,3} means, translated:
Look for any uppercase letter in ASCII(!) (not "öäü" or similar)
ranging from "A" to "Z".

Now look for digits (the second character class), with the addition that
the rule is satisfied ONLY if at least 1 and at most 3 digits are found.
Note: if there are 4 or more digits, the rule is still satisfied and
will provide a match/hit (unless the pattern is anchored, e.g. with $).

Ok, thanks a lot for the elaboration... I think I need to work with it
myself at some point to be sure I understand it...
If there is a follow-up part, the next piece of the pattern is evaluated
if present, and so forth. If the whole expression is satisfied, the
match is returned.

The lazy and greedy quantifiers try, as the naming implies, to match as
little or as much as possible of what you have asked for.

For example, this regular expression is valid:
0* -> look for zeros, and be greedy about how many consecutive zeros you
match (possibly none at all).

Regular expressions don't have to include capturing groups in order to work.

But once you use them yourself a bit, it's quite simple, I think.
I guess you are anyhow busy wrangling with pylint, PEP standards and
PyOpenGL - so good luck with that :)

You're right - I'm a bit "overbooked" at the moment - but thanks a lot
for clarifying this with the regexps :)
 

Chris Angelico

If you don't like calling it "pass-by-reference", perhaps you prefer
calling it "call by object reference"... From:
http://effbot.org/zone/call-by-object.htm

In any case I think we understand each other.

That's one of the links I just posted :) It's not just a naming
difference, though. With Pascal's pass-by-reference semantics, this
code would act differently:

def foo(x):
    x = 5

a = 2
foo(a)
print(a)

Python prints 2, because the assignment to x just rebinds the name
inside foo. True pass-by-reference actually changes the caller's
variable. C can achieve this by means of pointers; in Python, you can
pass and mutate a list, thus:

def foo(x):
    x[0] = 5  # Dereference the pointer, kinda

x = [None]  # Declare a pointer variable, ish
x[0] = 2
foo(x)  # Don't forget to drop the [0] when passing the pointer to another function
print(x[0])  # Prints 5. See? We have pass-by-reference!

But otherwise, rebinding names in the function has no effect on
anything outside. Among other things, this guarantees that, in any
situation, a name referencing an object can be perfectly substituted
for any other name referencing the same object, or any other way of
accessing the object.

def foo(lst):
    lst[0] = len(lst)

x = [10, 20, 30]
y = x
foo(x)  # These two...
foo(y)  # ... are identical!

This is a philosophy that extends through the rest of the language. A
function returning a list can be directly dereferenced, as can a list
literal:

def foo():
    return [0, 1, 4, 9, 16]

print(["Hello", "world!", "Testing", "Testing", "One", "Two", "Three"][foo()[2]])

This is a flexibility and power that just doesn't exist in many older
languages (C and PHP, I'm looking at you). Object semantics make more
sense than any other system for a modern high level language, which is
why many of them do things that way.

ChrisA
 

someone

That's one of the links I just posted :) It's not just a naming
difference, though. With Pascal's pass-by-reference semantics, this
code would act differently:

def foo(x):
    x = 5

a = 2
foo(a)
print(a)

Python prints 2, because the assignment to x just rebinds the name
inside foo. True pass-by-reference actually changes the caller's
variable. C can achieve this by means of pointers; in Python, you can
pass and mutate a list, thus:

def foo(x):
    x[0] = 5  # Dereference the pointer, kinda

x = [None]  # Declare a pointer variable, ish
x[0] = 2
foo(x)  # Don't forget to drop the [0] when passing the pointer to another function
print(x[0])  # Prints 5. See? We have pass-by-reference!

I thought that python also used "true" pass-by-reference, although I
haven't figured out exactly when I have this problem. I can just see
that sometimes I get this problem and then I need to copy the variable,
if I don't want the original data of the variable to be overwritten...
Yes, something like this has happened to me in my python code... Not
sure if my example was exactly like this, but I remember something where
I found this to be a problem that I had to fix.

But otherwise, rebinding names in the function has no effect on
anything outside. Among other things, this guarantees that, in any
situation, a name referencing an object can be perfectly substituted
for any other name referencing the same object, or any other way of
accessing the object.

def foo(lst):
    lst[0] = len(lst)

x = [10, 20, 30]
y = x
foo(x)  # These two...
foo(y)  # ... are identical!

Hmm, ok... So it's not true pass-by-reference like I thought... That's
interesting. This is something I've experienced from my own coding, I
think...

This is a philosophy that extends through the rest of the language. A
function returning a list can be directly dereferenced, as can a list
literal:

def foo():
    return [0, 1, 4, 9, 16]

print(["Hello", "world!", "Testing", "Testing", "One", "Two", "Three"][foo()[2]])

That prints out "One"... I think I understand - that's interesting too...
This is a flexibility and power that just doesn't exist in many older
languages (C and PHP, I'm looking at you). Object semantics make more
sense than any other system for a modern high level language, which is
why many of them do things that way.

I agree - I think python is really great and it's fast to do something
useful. I'm not sure I understand every aspect of this pass-by-value vs.
pass-by-reference yet, but in any case I know enough to watch out for
this problem, and once I see I have it, I can take care of it and solve
it by making a copy. I think after another 3-6-9 months of working with
python I should be fully confident with this. Thanks for taking the time
to explain a bit of this to me...
 

alex23

I thought that python also used "true" pass-by-reference, although I
haven't figured out exactly when I have this problem. I can just see
that sometimes I get this problem and then I need to copy the variable,
if I don't want the original data of the variable to be overwritten...

Generally I find it easier to call variables 'labels' or 'references';
you're not stashing a value into a slot, you're giving a name to an
object. So you're never really passing values around, just labels that
refer to an object.

The confusion kicks in because there are two types of object: mutable
and immutable. Mutable objects can change, immutable objects cannot.
Operations on an immutable object will return a _new_ immutable
object; the label used in an operation on an immutable object will
refer to the new object, any other references to the original object
will continue to refer to the original. Strings, numbers, tuples and
frozensets are all immutable built-in types.

>>> a = 'alpha'
>>> b = a
>>> b += 'bet'
>>> b
'alphabet'
>>> a
'alpha'

With immutable types, you're safe to pass them into a function, act on
them, and not have references in the outer scope reflect the change:

>>> def shout(s):
...     s += '!'
...
>>> a = 'alpha'
>>> shout(a)
>>> a
'alpha'

Everything else, including user-defined objects, tends to be mutable:

>>> a = dict(foo=1, bar=2)
>>> b = a
>>> a['foo'] = 99
>>> a
{'foo': 99, 'bar': 2}
>>> b
{'foo': 99, 'bar': 2}

With mutable objects, you're _always_ operating on the _same_ object
that everything is referring to, even when you pass it into a
different scope:
>>> def toggle_foo(x):
...     x['foo'] = not x['foo']
...
>>> a = dict(foo=True)
>>> toggle_foo(a)
>>> a
{'foo': False}

Note that the `toggle_foo` function doesn't return anything, nor is
the value of a re-assigned. The object that a refers to is passed into
`toggle_foo`, modified in place, and all references to it remain
pointing to the same, now modified object.

This is one of the big causes of unexpected behaviour to new Python
users, but as you get a better understanding of how Python's object
model works, you'll see it's really quite consistent and predictable.

There are a couple of ways you can ensure you're always working with a
copy of an object if you need to. For lists, the canonical way is:
>>> a = [1, 2, 3]
>>> b = a[:]  # shallow copy of a
>>> b.append(99)
>>> b
[1, 2, 3, 99]
>>> a
[1, 2, 3]

`b = a[:]` uses slice notation to create a new list that contains
everything in the original list, from start to end. This is called a
"shallow" copy; `a` and `b` both refer to _different_ lists, but the
objects within are the _same_ objects. For numbers, this isn't
important, as they're immutable. For mutable objects, however, it's
something you need to bear in mind:
>>> a = [[1, 2], [3, 4]]  # list of lists
>>> b = a[:]
>>> b[0][0] = 99
>>> b
[[99, 2], [3, 4]]
>>> a
[[99, 2], [3, 4]]

So here, while `b` refers to a copy of `a`, that copy contains the
_same_ list objects that `a` does, so any changes to the internal
lists will be reflected in both references, while any changes to `b`
itself won't be:
>>> b.append([5, 6])
>>> b
[[99, 2], [3, 4], [5, 6]]
>>> a
[[99, 2], [3, 4]]

This is where the `copy` module comes to the rescue, providing a
shallow copy function `copy`, as well as `deepcopy` that makes copies
of all the objects within the object:
>>> import copy
>>> a = [[1, 2], [3, 4]]  # list of lists
>>> b = copy.deepcopy(a)
>>> b[0][0] = 99
>>> b
[[99, 2], [3, 4]]
>>> a
[[1, 2], [3, 4]]

If you plan on using the `copy` module, it's worthwhile reading its
docs, as there are some nuances to it that I'm glossing over here.
TBH, I don't recall ever needing to use `copy` in my code.

Hope this helps bring consistency where you currently find confusion :)
 

someone

Generally I find it easier to call variables 'labels' or 'references';
you're not stashing a value into a slot, you're giving a name to an
object. So you're never really passing values around, just labels that
refer to an object.
Ok.

The confusion kicks in because there are two types of object: mutable
and immutable. Mutable objects can change, immutable objects cannot.

Yes, I've seen that described before.
Operations on an immutable object will return a _new_ immutable
object; the label used in an operation on an immutable object will
refer to the new object, any other references to the original object
will continue to refer to the original. Strings, numbers, tuples and
frozensets are all immutable built-in types.


Ok, I think I knew some of these things, however I didn't think so much
about them before now.
With immutable types, you're safe to pass them into a function, act on
them, and not have references in the outer scope reflect the change:


This is exactly what I've found happens to me sometimes. Until now
I've managed to fix my problems. But it's good to remember this thing
with immutable vs. mutable types, which was something I didn't think
much about before. I'll think about this difference in the future, thank
you.
Everything else, including user-defined objects, tends to be mutable:

>>> a = dict(foo=1, bar=2)
>>> b = a
>>> a['foo'] = 99
>>> a
{'foo': 99, 'bar': 2}
>>> b
{'foo': 99, 'bar': 2}

Yes, I've noticed this a lot of times in my own coding...
With mutable objects, you're _always_ operating on the _same_ object
that everything is referring to, even when you pass it into a
different scope:
>>> def toggle_foo(x):
...     x['foo'] = not x['foo']
...
>>> a = dict(foo=True)
>>> toggle_foo(a)
>>> a
{'foo': False}

Exactly, what I've also experienced a few times.
Note that the `toggle_foo` function doesn't return anything, nor is
the value of a re-assigned. The object that a refers to is passed into
`toggle_foo`, modified in place, and all references to it remain
pointing to the same, now modified object.

Yes, I notice that, thanks.
This is one of the big causes of unexpected behaviour to new Python
users, but as you get a better understanding of how Python's object
model works, you'll see it's really quite consistent and predictable.

It was a bit surprising to me in the beginning - though I'm still a
beginner (or maybe now almost an "intermediate" user), the good
explanation you've given now wasn't something I'd thought much about
before. But I've clearly experienced many of the examples you write
about here in my own coding, and I've usually been very careful about
checking whether my original object was modified unintentionally. I
think I can deal with this now.
There are a couple of ways you can ensure you're always working with a
copy of an object if you need to. For lists, the canonical way is:
>>> a = [1, 2, 3]
>>> b = a[:]  # shallow copy of a
>>> b.append(99)
>>> b
[1, 2, 3, 99]
>>> a
[1, 2, 3]

`b = a[:]` uses slice notation to create a new list that contains
everything in the original list, from start to end. This is called a
"shallow" copy; `a` and `b` both refer to _different_ lists, but the
objects within are the _same_ objects.

Ok, good to know.

For numbers, this isn't important, as they're immutable. For mutable
objects, however, it's something you need to bear in mind:

>>> a = [[1, 2], [3, 4]]  # list of lists
>>> b = a[:]
>>> b[0][0] = 99
>>> b
[[99, 2], [3, 4]]
>>> a
[[99, 2], [3, 4]]

So here, while `b` refers to a copy of `a`, that copy contains the
_same_ list objects that `a` does, so any changes to the internal
lists will be reflected in both references, while any changes to `b`
itself won't be:
>>> b.append([5, 6])
>>> b
[[99, 2], [3, 4], [5, 6]]
>>> a
[[99, 2], [3, 4]]

Yes, I've experienced this kind of thing before - but it's a very, very
good explanation you've given me now. It makes me understand the problem
much better, next time I experience it...
This is where the `copy` module comes to the rescue, providing a
shallow copy function `copy`, as well as `deepcopy` that makes copies
of all the objects within the object:
>>> import copy
>>> a = [[1, 2], [3, 4]]  # list of lists
>>> b = copy.deepcopy(a)
>>> b[0][0] = 99
>>> b
[[99, 2], [3, 4]]
>>> a
[[1, 2], [3, 4]]

If you plan on using the `copy` module, it's worthwhile reading its
docs, as there are some nuances to it that I'm glossing over here.
TBH, I don't recall ever needing to use `copy` in my code.

I almost had to use this "copy.deepcopy" the other day, but I googled
for it and then I found a way to avoid it...
Hope this helps bring consistency where you currently find confusion :)

Yes, this is a VERY good explanation. I was a bit confused when I
created my own user-defined objects, but now you've explained that these
tend to be mutable, which is also my experience.

I'll still keep an extra eye out for this special way of sending objects
back and forth between functions, so I don't unintentionally overwrite
some data...

Thank you very much for a very good and precise description of this
phenomenon of the python language (or whatever I should call it) :)
 
