Syntactic sugar for assignment statements: one value to multiple targets?

gc

Hi everyone! Longtime lurker, hardly an expert, but I've been using
Python for various projects since 2007 and love it.

I'm looking for either (A) suggestions on how to do a very common
operation elegantly and Pythonically, or (B) input on whether my
proposal is PEP-able, assuming there's no answer to A. (The proposal
is sort of like the inverse of PEP 3132; I don't think it has been
proposed before, sorry if I missed it.)

Anyway, I frequently need to initialize several variables to the same
value, as I'm sure many do. Sometimes the value is a constant, often
zero; sometimes it's more particular, such as defaultdict(list). I use
dict() below.

Target lists using comma separation are great, but they don't work
very well for this task. What I want is something like

a,b,c,d,e = *dict()

where * in this context means something like "assign separately to
all." I'm not sure that * would the best sugar for this, but the
normal meaning of * doesn't seem as if it would ever be valid in this
case, and it somehow feels right (to me, anyway).

Statements fitting the form above would get expanded during parsing to
a sequence of separate assignments (a = dict(); b = dict(); c = dict()
and so forth.) That's all there is to it. Compared to the patterns
below, it's svelte, less copy-paste-y (so it removes an opportunity
for inconsistency, where I remember to change a-d to defaultdict(list)
but forget with e), and it doesn't require me to keep count of the
number of variables I'm initializing.

This would update section 6.2 of the language reference and require a
small grammar expansion.

But: Is there already a good way to do this that I just don't know?
Below, I compare four obvious patterns, three of which are correct but
annoying and one of which is incorrect in a way which used to surprise
me when I was starting out.

# Option 1 (separate lines)
# Verbose and annoying, particularly when the varnames are long and of
# irregular length

a = dict()
b = dict()
c = dict()
d = dict()
e = dict()

# Option 2 (one line)
# More concise but still pretty annoying, and hard to read (alternates
# variables and assignments)

a = dict(); b = dict(); c = dict(); d = dict(); e = dict()

# Option 3 (multiple target list: this seems the most Pythonic, and is
# normally what I use)
# Concise, separates variables from assignments, but somewhat
# annoying; have to change individually and track numbers on both sides.

a,b,c,d,e = dict(),dict(),dict(),dict(),dict()

# Option 4 (iterable multiplication)
# Looks better, and if the dict() should be something else, you only
# have to change it once, but the extra brackets are ugly and you still
# have to keep count of the targets...

a,b,c,d,e = [dict()] * 5

# and it will bite you...
>>> a[1] = 1
>>> b
{1: 1}
>>> id(a) == id(b)
True

# Gotcha!

# Other forms of 4 also have this behavior:

>>> a, b, c, d, e = ({},) * 5
>>> a[1] = 1
>>> b
{1: 1}

Alternatively, is there a version of iterable multiplication that
creates new objects rather than just copying the reference? That would
solve part of the problem, though it would still look clunky and you'd
still have to keep count.

Any thoughts? Thanks!
 
Chris Angelico

gc said:
Anyway, I frequently need to initialize several variables to the same
value, as I'm sure many do. Sometimes the value is a constant, often
zero; sometimes it's more particular, such as defaultdict(list). I use
dict() below.

If it's an immutable value (such as a constant integer), you can use
syntax similar to C's chained assignment:

a=b=c=0

If you do this with dict(), though, it'll assign the same dictionary
to each of them - not much use.
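
For example, in the interactive interpreter:

>>> a = b = dict()
>>> a["k"] = 1
>>> b
{'k': 1}
>>> a is b
True
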
# Option 3 (multiple target list: this seems the most Pythonic, and is
normally what I use)
# Concise, separates variables from assignments, but somewhat
annoying; have to change individually and track numbers on both sides.

a,b,c,d,e = dict(),dict(),dict(),dict(),dict()

I think this is probably the best option, although I would be inclined
to use dictionary-literal syntax:
a,b,c,d,e = {},{},{},{},{}

It might be possible to do something weird with map(), but I think
it'll end up cleaner to do it this way.
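
For instance, one possible map() spelling (arguably no cleaner) would be:

a, b, c, d, e = map(lambda _: {}, range(5))

where the lambda builds a fresh dict for each element of range(5).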

Chris Angelico
 
Steven D'Aprano

gc said:
Target lists using comma separation are great, but they don't work
very well for this task. What I want is something like

a,b,c,d,e = *dict()


a, b, c, d, e = [dict() for i in range(5)]

Unfortunately there is no way of doing so without counting the assignment
targets. While slightly ugly, it doesn't seem ugly enough to justify the
extra complexity of special syntax for such a special case.
 
Gregory Ewing

gc said:
Alternatively, is there a version of iterable multiplication that
creates new objects rather than just copying the reference?

You can use a list comprehension:

a, b, c, d, e = [dict() for i in xrange(5)]

or a generator expression:

a, b, c, d, e = (dict() for i in xrange(5))
 
Tim Chase

gc said:
Target lists using comma separation are great, but they don't work
very well for this task. What I want is something like

a,b,c,d,e = *dict()


a, b, c, d, e = [dict() for i in range(5)]

Unfortunately there is no way of doing so without counting the assignment
targets. While slightly ugly, it doesn't seem ugly enough to justify the
extra complexity of special syntax for such a special case.

I understand that in Py3k (and perhaps back-ported into later 2.x
series?) one can do something like

a, b, c, d, e, *junk = (dict() for _ in range(9999))

to prevent the need to count. However, I was disappointed that, with
all the generator-ification of things in Py3k, this "and the
rest" syntax slurps up the entire generator rather than just
assigning the iterator. That would much more tidily be written as

a,b,c,d,e, *more_dict_generator = (dict() for _ in itertools.count())

(itertools.count() happening to be an infinite generator). I can
see the need to slurp if you have things afterward:

a,b,c, *junk ,d,e = iterator(...)

but when the "and the rest" is the last one, it would make sense
not to force a slurp.
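
(A rough workaround today, if you don't mind reintroducing the count,
is to take just the first five with itertools.islice instead of slurping:

from itertools import count, islice
a, b, c, d, e = islice((dict() for _ in count()), 5)

but that's still the bookkeeping the proposal is trying to avoid.)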

-tkc
 
Tim Chase

a, b, c, d, e = [dict() for i in range(5)]

I think this is good code -- if you want five different dicts,
then you should call dict five times. Otherwise Python will
magically call your expression more than once, which isn't
very nice. And what if your datatype constructor has
side-effects?

If the side-effects are correct behavior (perhaps opening files,
network connections, or even updating a class variable) then
constructor side-effects are just doing what they're supposed to.
E.g. something I use somewhat regularly in my code[*]:

a, b, c, d = (file('file%i.txt' % i, 'w') for i in range(4))

If the side-effects aren't performing the correct behavior, fix
the constructor. :)

-tkc


[*] okay, it's more like

(features,
  adjustments,
  internet,
  ) = (file(fname) for fname in (
    'features.txt',
    'adjustments.txt',
    'internet.txt'
    ))

or even

(features,
  adjustments,
  internet,
  ) = (
    set(
      line.strip().upper()
      for line
      in file(fname)
      if line.strip()
      )
    for fname in (
      'features.txt',
      'adjustments.txt',
      'internet.txt'
      ))

to load various set() data from text-files.
 
gc

Thanks for all the discussion on this. Very illuminating. Sorry for
the long delay in responding--deadlines intervened.

I will use the list comprehension syntax for the foreseeable future.

Tim, I agree with you about the slurping in final position--it's
actually quite surprising. As I'm sure you realized, that behavior
makes your 'tidier' version:

a,b,c,d,e, *more_dict_generator = (dict() for _ in itertools.count())

break with a MemoryError, which I don't think is the result that most
people would expect.

Steven D'Aprano said:
While slightly ugly, it doesn't seem ugly enough to justify the
extra complexity of special syntax for such a special case.

You're probably right (although for my coding this multiple assignment
scenario is a pretty ordinary case.) Anyway, I'll shop the a,b,c =
*dict() syntax over to python-ideas just to see what they say.

Thanks again, everyone! Happy Python.

 
Martin P. Hellwig

On 03/08/2011 02:45, gc wrote:
a,b,c,d,e = *dict()

where * in this context means something like "assign separately to
all.
Any thoughts? Thanks!

Well, I've got a thought, but I'm afraid it is the opposite of helpful
in the direct sense. So if you don't want to hear it, skip it :)

Although I can't argue it rigorously, it has a certain code smell to
it, in the sense that it could hint that there is a better, more
readable way of solving that particular problem (taking into account
that the one-letter names are purely for demonstration purposes).

I would love to see an example where you would need such a construct.
 
gc

Martin P. Hellwig said:
<snip> . . . it has a certain code smell to it. <snip>
I would love to see an example where you would need such a construct.

Perfectly reasonable request! Maybe there aren't as many cases when
multiple variables need to be initialized to the same value as I think
there are.

I'm a heavy user of collections, especially counters and defaultdicts.
One frequent pattern involves boiling (typically) SQLite records down
into Python data structures for further manipulation. (OK, arguably
that has some code smell right there--but I often have to do very
expensive analysis on large subsets with complex definitions which can
be very expensive to pull, sometimes requiring table scans over tens
of gigabytes. I *like* being able to use dicts or other structures as
a way to cache and structure query results in ways amenable to
analysis procedures, even if it doesn't impress Joe Celko.)

defaultdict(list) is a very clean way to do this. I'll often have four
or five of them collecting different subsets of a single SQL pull, as
in:

# PROPOSED SYNTAX:
(all_pets_by_pet_store, blue_dogs_by_pet_store,
 green_cats_by_pet_store, red_cats_and_birds_by_pet_store) = *defaultdict(list)

# (Yes, indexes on color and kind would speed up this query, but the
# actual fields can be quite complex and have much higher cardinality.)
for pet_store, pet_kind, pet_color, pet_weight, pet_height in \
        cur.execute("""SELECT s, k, c, w, h FROM SuperExpensivePetTable WHERE
                       CostlyCriterionA(criterion_basis) IN ("you", "get", "the", "idea")"""):
    all_pets_by_pet_store[pet_store].append((pet_weight, pet_height))
    if pet_color in ("Blue", "Cyan", "Dark Blue") and pet_kind in ("Dog", "Puppy"):
        blue_dogs_by_pet_store[pet_store].append((pet_weight, pet_height))
    #... and so forth

all_pets_analysis = BootstrappedMarkovDecisionForestFromHell(all_pets_by_pet_store)
blue_dogs_analysis = BootstrappedMarkovDecisionForestFromHell(blue_dogs_by_pet_store)
red_cats_and_birds_analysis = BMDFFHPreyInteraction(red_cats_and_birds_by_pet_store)
#... and so forth

Point is, I'd like to be able to create such collections cleanly, and
a,b,c = *defaultdict(list) seems as clean as it gets. Plus, when I
realize I need six (or only three) it's annoying to need to change two
redundant things (i.e. both the variable names and the count.)

if tl_dr: break()

Generally speaking, when you know you need several variables of the
same (maybe complex) type, the proposed syntax lets the equals sign
fully partition the variables question (how many variables do I need,
and what are they called?) from the data structure question (what type
should the variables be?) The other syntaxes (except Tim's generator-
slurping one, which as Tim points out has its own issues) all spread
the variables question across the equals sign, breaking the analogy
with single assignment (where the form is Variable = DataStructure).
If we're allowing multiple assignment, why can't we allow some form of
Variable1, Variable2, Variable3 = DataStructure without reaching for
list comprehensions, copy-pasting and/or keeping a side count of the
variables?

if much_tl_dr: break()

Let me address one smell from my particular example, which may be the
one you're noticing. If I needed fifty parallel collections I would
not use separate variables; I've coded a ghastly defaultdefaultdict
just for this purpose, which effectively permits what most people
would express as defaultdict(defaultdict(list)) [not possible AFAIK
with the existing defaultdict class]. But for reasons of explicitness
and simplicity I try to avoid hash-tables of hash-tables (and higher
iterations). I'm not trying to use dicts to inner-platform a fully
hashed version of SQL, but to boil things down.

Also bear in mind, reading the above, that I do a lot of script-type
programming which is constantly being changed under severe time
pressure (often as I'm sitting next to my clients), which needs to be
as simple as possible, and which tech-savvy non-programmers need to
understand. A lot of my value comes from quickly creating code which
other people (usually research academics) can feel ownership over.
Python is already a great language for this, and anything which makes
the syntax cleaner and more expressive and which eliminates redundancy
helps me do my job better. If I were coding big, stable applications
where the defaultdicts were created in a header file and wouldn't be
touched again for three years, I would care less about a more awkward
initialization method.
 
MRAB

gc said:
Perfectly reasonable request! Maybe there aren't as many cases when
multiple variables need to be initialized to the same value as I think
there are.
[snip]
As I see it, there are 2 issues:

1. Repeated evaluation of an expression: "dict()" would be evaluated as
many times as necessary. In other words, it's an unlimited generator.

2. Lazy unpacking: unpacking normally continues until the source is
exhausted, but here you want it to stop when the destination (the RHS)
is satisfied.

It just happens that in your use-case they are being used together.
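
(Issue 1 by itself already has a spelling, as it happens: the
two-argument form of iter() calls a callable repeatedly until it
returns a sentinel, so iter(dict, None) is an unlimited source of
fresh dicts:

from itertools import islice
a, b, c, d, e = islice(iter(dict, None), 5)

It's issue 2, stopping when the targets run out, that current syntax
can't express.)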
 
Chris Angelico

gc said:
Perfectly reasonable request! Maybe there aren't as many cases when
multiple variables need to be initialized to the same value as I think
there are.

Minor clarification: You don't want to initialize them to the same
value, which you can do already:

a=b=c=d=e=dict()

You want to initialize them each to a fresh evaluation of the same
expression. What you're asking for is a syntax that writes an
expression once, but evaluates it many times; I think it's going to
work out something very similar to a list comprehension (as has been
mentioned).

ChrisA
 
gc

Chris Angelico said:
Minor clarification: You don't want to initialize them to the same
value, which you can do already:

a=b=c=d=e=dict()

Right. Call the proposed syntax the "instantiate separately for each
target" operator. (It can be precisely defined as a * on the RHS of a
one-into-many assignment statement--i.e. an assignment statement with
1 object on the RHS and more than 1 on the LHS).

It has only one very modest function, which is to unpack

a, b, c, d, e = *dict()

to

a, b, c, d, e = dict(), dict(), dict(), dict(), dict()

so that you have n separate objects instead of one. If you want the
same object duplicated five times, you'd best use a=b=c=d=e=dict().
(I'd guess that 90% of the people who try the a=b=c version actually
*want* separate objects and are surprised at what they get--I made
that mistake a few times!--but changing either behavior would be a
very bad idea. This proposed syntax would be the Right Way to get
separate objects.)

Maybe this is more visibly convenient with a complex class, like

x, y, z = *SuperComplexClass(param1, param2, kwparam = "3", ...)

where you need three separate objects but don't want to duplicate the
class call (for obvious copy-paste reasons) and where bundling it in a
list comprehension:

x, y, z = [SuperComplexClass(param1, etc, ...) for _ in range(3)]

layers gunk on top of something that's already complex.
I think it's going to work out something very similar to a
list comprehension (as has been mentioned).

Right; kind of a self-limiting generator[1], although that sounds MUCH
more complex than it needs to. It's really just sugar. Not that this
is a suggestion :) but it could easily be done with a pre-
processor. It would also be perfectly amenable to automated code
conversion (i.e. 3to2).

1. Repeated evaluation of an expression: "dict()" would be evaluated as
many times as necessary. In other words, it's an unlimited generator.
2. Lazy unpacking: unpacking normally continues until the source is
exhausted, but here you want it to stop when the destination (the RHS)
is satisfied.

Yes, this is a good way to think of it. (Although I think you meant to
type LHS, right?) * in this context would tell Python to do both of
these things at once: evaluate successively and unpack lazily to the
destination. Although it's still fully (and more simply) explainable
as sugar.

[1] Self-limiting is the key here. As above, the only proposed methods
which don't require manual tallying on the RHS are Tim's very creative
a,b,c,d,e,*scrap = (dict() for _ in range(9999)), which in this case
creates a list containing 9994 unnecessary dicts, or the purer but
Python-crashing a,b,c,d,e,*scrap = (dict() for _ in
itertools.count()). Tim's intuition, which I share, is that in both
cases *scrap should, since it's in final position, actually become the
generator-in-state-6, rather than slurping the rest into a list. One
argument for this is that the present behavior is (surprisingly,
counterintuitively) identical for list comprehensions and generator
expressions. Changing the outer parens to brackets yields the same
results. But that's a separate, more complex proposal which would need
its own use cases.
 
Chris Angelico

gc said:
Right. Call the proposed syntax the "instantiate separately for each
target" operator.  (It can be precisely defined as a * on the RHS of a
one-into-many assignment statement--i.e. an assignment statement with
1 object on the RHS and more than 1 on the LHS).

Agreed, but there's no requirement for it to be instantiating
something (although that will be common). "dict()" is an expression
that you want to evaluate five (or however many) times. It might just
as easily be some other function call; for instance:

head1,head2,head3=file.readline()

to read three lines from a file. Or it mightn't even be a function call per se.
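
(With current syntax, the closest equivalent is presumably a genexp
that calls readline() once per target:

head1, head2, head3 = (file.readline() for _ in range(3))

where file is the open file object from the example above -- though
you're back to keeping the 3 in step with the targets.)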

ChrisA
 
gc

(snip)
It might just
as easily be some other function call; for instance:

head1,head2,head3=file.readline()

Hm--that's interesting! OK, call it the "evaluate separately for each
target" operator.

Same benefits in this use case; if I realize that the file only has
two header lines, I can just change

head1, head2, head3 = *file.readline()

to

head1, head2 = *file.readline()

without needing to keep a RHS like = [file.readline() for _ in
range(3)] in lockstep with the number of variables I'm assigning.

Presumably this syntax should be disallowed, as a grammatical matter,
when there's a starred target (per PEP 3132/language reference 6.2).
That is,

head, *body, tail = *file.readline()

is disallowed, since it is (by definition) simply sugar for

head = file.readline()
*body = file.readline()
tail = file.readline()

and

*body = file.readline() is disallowed (see PEP 3132). (Here, of
course, you'd just want head, *body, tail = file.readlines(), which is
perfectly good code.)

PS.

Off-topic, but the *target syntax already gets into similar territory,
since

a, *b, c = itertools.count()

crashes with a MemoryError--but what else could it do? Ruling out such
infinite-assignment statements on a grammatical basis would require
solving the halting problem through static analysis, which might be a
bit inefficient :p
 
Terry Reedy

The issue behind this thread is that for immutable objects, binding to n
copies has the same effect as n bindings to one object (so one does not
really have to know which of the two one is doing), whereas the two are different
for mutable objects (so one does have to know). In short, identity
matters for mutables but not for immutables. Python programmers must
learn both this and the fact that Python does not make copies unless asked.

Adding a special case exception to the latter to mask the former does
not seem like a good idea.

It has only one very modest function, which is to unpack

a, b, c, d, e = *dict()

*expression has already been proposed to generally mean what it does in
function calls -- unpack the iterator in place.

funnylist = [1,2,*dict,99,100]
# == [1,2]+list(dict)+[99,100]

would interpolate the keys of the dict into the list.

There is a tracker issue for this -- it would be a follow-on to the
addition of *target in assignments.

In a real sense, "a,b = iterable" *already* means "a,b = *iterable". If
*iterable had been in general use from the beginning, I presume the latter
is how we would write sequence unpacking for assignments.
a, b, c, d, e = dict(), dict(), dict(), dict(), dict()

*expression will not be changed in meaning to magically re-evaluate an
expression some multiple number of times according to code elsewhere.
so that you have n separate objects instead of one. If you want the
same object duplicated five times, you'd best use a=b=c=d=e=dict().

Not 'duplicated', but 'bound'.
(I'd guess that 90% of the people who try the a=b=c version actually
*want* separate objects and are surprised at what they get--I made
that mistake a few times!

Guessing that 90% of people are like you is likely to be wrong.
I think this use case (for more than 2 or 3 copies) is pretty rare for
most people.

Where many people do trip up is "array = [[0]*i]*j", expecting to get j
copies of [0]*i rather than j bindings of one object. But then, they
must have the same wrong idea that [0]*i makes i copies of 0. For
immutable 0, the misunderstanding does not matter. For mutable [0]*i, it
does. People *must* learn that sequence multiplication multiplies
bindings, not (copies of) objects. Both multiple copy problems have the
same solution:

array = [[0]*i for _ in range(j)]
a,b,c,d,e = [dict() for _ in range(5)]

The fact that the number of assignment sources (possibly after implicit
unpacking) and targets have to match, unless one uses *target, and that
both sides need to be changed if one is, is true of all assignments, not
just this rare case.
--but changing either behavior would be a
very bad idea. This proposed syntax would be the Right Way to get
separate objects.)

It would be very Wrong as it already has a very different meaning.
 
MRAB

gc said:
Right. Call the proposed syntax the "instantiate separately for each
target" operator. (It can be precisely defined as a * on the RHS of a
one-into-many assignment statement--i.e. an assignment statement with
1 object on the RHS and more than 1 on the LHS).
I think that lazy unpacking is the more important issue because we can
replace instantiation with copying:

def copies(obj, count=None):
    if count is None:
        while True:
            yield obj.copy()
    else:
        for i in range(count):
            yield obj.copy()

(Should it yield deep copies, or should there be a separate deep_copies
function?)
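
A deep_copies variant would presumably just swap in copy.deepcopy:

import copy

def deep_copies(obj, count=None):
    if count is None:
        while True:
            yield copy.deepcopy(obj)
    else:
        for i in range(count):
            yield copy.deepcopy(obj)
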
It has only one very modest function, which is to unpack

a, b, c, d, e = *dict()

to

a, b, c, d, e = dict(), dict(), dict(), dict(), dict()
This becomes:

a, b, c, d, e = copies(dict(), 5)

With lazy unpacking it would become:

a, b, c, d, e = lazy copies(dict())

(Or whatever the syntax is.)
so that you have n separate objects instead of one. If you want the
same object duplicated five times, you'd best use a=b=c=d=e=dict().
(I'd guess that 90% of the people who try the a=b=c version actually
*want* separate objects and are surprised at what they get--I made
that mistake a few times!--but changing either behavior would be a
very bad idea. This proposed syntax would be the Right Way to get
separate objects.)

Maybe this is more visibly convenient with a complex class, like

x, y, z = *SuperComplexClass(param1, param2, kwparam = "3", ...)
x, y, z = lazy copies(SuperComplexClass(param1, etc, ...))

[snip]
 
Chris Angelico

x, y, z = lazy copies(SuperComplexClass(param1, etc, ...))

This assumes that you can construct it once and then copy it reliably,
which may mean requiring the class to implement copying correctly. It also
wouldn't work with:

a, b, c, d = *random.randint(1,20)

which would roll 4d20 and get the results in separate variables. The
OP's idea of separately evaluating the expression would; but to do it
with copying would require a special "randint" object that functions
exactly as an integer but, when copied, would re-randomize.

Perhaps * is the wrong syntactic element to use. Maybe it needs a
special assignment operator:

a, b, c, d @= random.randint(1,20)

which would evaluate its left operand as a tuple of lvalues, then
evaluate its right operand once for each element in the left operand,
and assign to each element in turn. (I've no idea what form of
assignment operator would be suitable, but @= is currently illegal, so
it ought to be safe at least for discussion purposes.)
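
Spelled out, the 4d20 example above would then behave as if written:

import random
a = random.randint(1, 20)
b = random.randint(1, 20)
c = random.randint(1, 20)
d = random.randint(1, 20)

one evaluation of the right operand per target, in order.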

Chris Angelico
 
OKB (not okblacke)

gc said:
Maybe this is more visibly convenient with a complex class, like

x, y, z = *SuperComplexClass(param1, param2, kwparam = "3", ...)

where you need three separate objects but don't want to duplicate the
class call (for obvious copy-paste reasons) and where bundling it in a
list comprehension:

x, y, z = [SuperComplexClass(param1, etc, ...) for _ in range(3)]

layers gunk on top of something that's already complex.

That just seems like an odd use case to me. I rarely find myself
wanting to make exactly N copies of the same thing and assign them to
explicit names. If I'm not making just one, it's usually because
I'm making some sort of list or dict of them that will be accessed by
index (not with names like "x", "y", and "z"), in which case a list
comprehension is the right way to go.

--
--OKB (not okblacke)
Brendan Barnwell
"Do not follow where the path may lead. Go, instead, where there is
no path, and leave a trail."
--author unknown
 
Ethan Furman

gc said:
Target lists using comma separation are great, but they don't work
very well for this task. What I want is something like

a,b,c,d,e = *dict()

This isn't going to happen. From all the discussion so far I think your
best solution is a simple helper function (not tested):

def repeat(count_, object_, *args, **kwargs):
    result = []
    for _ in range(count_):
        result.append(object_(*args, **kwargs))
    return result

a, b, c, d, e = repeat(5, dict)

These are each new objects, so depending on the function (like the
random.randint example) the values may not be the same.
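
(So the earlier 4d20 example would come out as roughly:

import random
d1, d2, d3, d4 = repeat(4, random.randint, 1, 20)

with each of the four getting its own roll.)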

Oh, and I put the trailing _ on count and object to minimize possible
conflicts with keyword arguments.

~Ethan~
 
Zero Piraeus

:

Off on a tangent ...

Let me address one smell from my particular example, which may be the
one you're noticing. If I needed fifty parallel collections I would
not use separate variables; I've coded a ghastly defaultdefaultdict
just for this purpose, which effectively permits what most people
would express as defaultdict(defaultdict(list)) [not possible AFAIK
with the existing defaultdict class].

Dunno if it's better than your ghastly defaultdefaultdict, but this works:

>>> from collections import defaultdict
>>> ddd = defaultdict(lambda: defaultdict(list))
>>> ddd["foo"]["bar"].append("something")
>>> ddd["foo"]["bar"]
['something']

-[]z.
 
