DRY functions with named attributes used as default arguments

Tim Chase

My intent is to have a function object something like

def foo(arg1, arg2=foo.DEFAULT):
    return int(do_stuff(arg1, arg2))
foo.SPECIAL = 42
foo.MONKEY = 31415
foo.DEFAULT = foo.SPECIAL

so I can call it with either

result = foo(myarg)

or

result = foo(myarg, foo.SPECIAL)

However I can't do this because foo.DEFAULT isn't defined at the
time the function is created. I'd like to avoid hard-coding
things while staying DRY, so I don't like

def foo(arg1, arg2=42)

because the default might change due to business rule changes, it
leaves a dangling "magic constant", and if the value of SPECIAL
changes I have to remember to change it in two places.

My current hack/abuse is to use __new__ in a class that can
contain the information:

class foo(object):
    SPECIAL = 42
    MONKEY = 31415
    DEFAULT = SPECIAL
    def __new__(cls, arg1, arg2=DEFAULT):
        return int(do_stuff(arg1, arg2))

i1 = foo("spatula")
i2 = foo("tapioca", foo.MONKEY)

1) is this "icky" (a term of art ;-)
2) or is this reasonable
3) or is there a better way to do what I want?

Thanks,

-tkc
 
Steven D'Aprano

Tim said:
My intent is to have a function object something like

def foo(arg1, arg2=foo.DEFAULT):
    return int(do_stuff(arg1, arg2))
foo.SPECIAL = 42
foo.MONKEY = 31415
foo.DEFAULT = foo.SPECIAL

What's the purpose of having both foo.SPECIAL and foo.DEFAULT?

You could always use a callable instance instead of a function, what C++
calls a functor (not to be confused with what Haskell calls a functor,
which is completely different).

class Foo:
    SPECIAL = 42
    MONKEY = 31415
    DEFAULT = SPECIAL
    def __call__(self, arg1, arg2=DEFAULT):
        ...

foo = Foo()
del Foo

The default value of arg2 is bound at class definition time, once. If you
prefer late binding instead of early binding, it is easy to put off the
assignment until the function is called:

def __call__(self, arg1, arg2=None):
    if arg2 is None:
        arg2 = self.DEFAULT
    ...

If None is a legitimate data value for arg2, you can create your own
sentinel:

SENTINEL = object()

and use that instead of None.
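
For example, a rough sketch of that sentinel pattern (do_stuff here is
just a stand-in for whatever the real function does):

_SENTINEL = object()  # unique marker that no caller can pass by accident

def do_stuff(arg1, arg2):
    return arg2  # stand-in for the real work

class Foo:
    SPECIAL = 42
    MONKEY = 31415
    DEFAULT = SPECIAL
    def __call__(self, arg1, arg2=_SENTINEL):
        if arg2 is _SENTINEL:
            # late binding: DEFAULT is looked up on every call
            arg2 = self.DEFAULT
        return int(do_stuff(arg1, arg2))

foo = Foo()

Now foo("spatula") picks up whatever Foo.DEFAULT happens to be at call
time, and the sentinel never collides with a legitimate arg2 value.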


so I can call it with either

result = foo(myarg)

or

result = foo(myarg, foo.SPECIAL)

However I can't do this because foo.DEFAULT isn't defined at the
time the function is created. I'd like to avoid hard-coding
things while staying DRY, so I don't like

def foo(arg1, arg2=42)

because the default might change due to business rule changes,

If the business rule changes, you have to change foo.DEFAULT anyway. So why
not cut out the middle man and change the default argument in the function
signature?

I
have a dangling "magic constant" and if the value of SPECIAL
changes, I have to catch that it should be changed in two places.

Then put it in one place.

SPECIAL = 42

def foo(arg1, arg2=SPECIAL):
    ...


and avoid the reference to foo.

My current hack/abuse is to use __new__ in a class that can
contain the information:

class foo(object):
    SPECIAL = 42
    MONKEY = 31415
    DEFAULT = SPECIAL
    def __new__(cls, arg1, arg2=DEFAULT):
        return int(do_stuff(arg1, arg2))

i1 = foo("spatula")
i2 = foo("tapioca", foo.MONKEY)

1) is this "icky" (a term of art ;-)
2) or is this reasonable

Seems okay to me. A little unusual, but only a little, not "WTF is this code
doing???" territory.
 
Tim Chase

What's the purpose of having both foo.SPECIAL and foo.DEFAULT?

As you later ask...
If the business rule changes, you have to change foo.DEFAULT
anyway. So why not cut out the middle man and change the
default argument in the function signature?

By indirecting through DEFAULT, I can change DEFAULT to point at
another behavior-tweaking option in one place ("DEFAULT =
SPECIAL") rather than in multiple places. However, I can't give
a very good argument against just using

def foo(arg1, arg2=SPECIAL)

and then, if it changes, just changing *that* one location to

def foo(arg1, arg2=MONKEY)

because, well, Python calls them default arguments for a reason :)
class Foo:
    SPECIAL = 42
    MONKEY = 31415
    DEFAULT = SPECIAL
    def __call__(self, arg1, arg2=DEFAULT):
        ...

foo = Foo()
del Foo

I did consider this (sorry I forgot to mention it) and it works
well too. It's a little cleaner, as the magic happens in
something named __call__, which is more obvious than overloading
odd behavior into __new__. The instantiate-and-delete-the-class
dance felt a little weird, and so did having both the class and the
instance in the namespace. Granted, the (ab)use of __new__ felt
weird too, so neither wins by a great margin. Which is part of my
question: what's the least-worst way to do this? :)
Then put it in one place.

SPECIAL = 42

def foo(arg1, arg2=SPECIAL):
    ...

and avoid the reference to foo.

I guess part of my attempt was to keep from littering the module
namespace with things that only apply to the one function (and
this also applies to C-like prefixes such as FOO_SPECIAL).
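
For what it's worth, a None default plus late binding would keep
everything hanging off foo itself (rough sketch, with do_stuff
stubbed out):

def do_stuff(arg1, arg2):
    return arg2  # stand-in for the real work

def foo(arg1, arg2=None):
    if arg2 is None:
        # by the time foo is called, the attributes below exist
        arg2 = foo.DEFAULT
    return int(do_stuff(arg1, arg2))

foo.SPECIAL = 42
foo.MONKEY = 31415
foo.DEFAULT = foo.SPECIAL

though that burns None as a possible value for arg2 (or needs the
sentinel trick you mention).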
Seems okay to me. A little unusual, but only a little, not
"WTF is this code doing???" territory.

The code felt like it was riding the WTF-boundary, so I find your
evaluation of "unusual but not WTF" encouraging.

Thanks for your thoughts,

-tkc
 
