Newbie: static typing?


Rui Maciel

Is there any pythonic way to perform static typing? After searching the web
I've stumbled on a significant number of comments that appear to cover
static typing as a proof of concept, but in the process I've found no
tutorial on how to implement it.

Does anyone care to enlighten a newbie?


Thanks in advance,
Rui Maciel
 

Gary Herron

Is there any pythonic way to perform static typing? After searching the web
I've stumbled on a significant number of comments that appear to cover
static typing as a proof of concept, but in the process I've found no
tutorial on how to implement it.

Does anyone care to enlighten a newbie?


Thanks in advance,
Rui Maciel

The Pythonic way is to *enjoy* the freedom and flexibility and power of
dynamic typing. If you are stepping out of a static typing language
into Python, don't step just half way. Embrace dynamic typing. (Like
any freedom, it can bite you at times, but that's no reason to hobble
Python with static typing.)


Python is both dynamically typed and strongly typed. If you are
confusing dynamic/static typing with weak/strong typing, see
http://wiki.python.org/moin/Why%20is%20Python%20a%20dynamic%20language%20and%20also%20a%20strongly%20typed%20language

Gary Herron
 

Ian Kelly

Is there any pythonic way to perform static typing? After searching the web
I've stumbled on a significant number of comments that appear to cover
static typing as a proof of concept, but in the process I've found no
tutorial on how to implement it.

Does anyone care to enlighten a newbie?

Python 3 has support for function annotations, but it leaves it
entirely up to the user how they wish to use these annotations (if at
all). In theory, a Python IDE could use function annotations to
perform static type checking, but I am not aware of any IDE that has
actually implemented this.
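
For illustration, a minimal sketch of what such annotations look like (the function and names here are invented; Python itself attaches no meaning to them at runtime):

```python
# Python 3 function annotations: purely informational metadata.
# The interpreter stores them but performs no checking itself.
def greet(name: str, times: int) -> str:
    return ", ".join([name] * times)

# Annotations are exposed on the function object for tools to inspect.
print(greet.__annotations__)

# A tool *could* compare these against call sites, but Python itself
# happily accepts arguments of any type here.
print(greet("hi", 2))
```

A static checker or IDE would have to read `__annotations__` and do its own analysis; nothing in the language enforces them.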
 

Steven D'Aprano

Is there any pythonic way to perform static typing? After searching the
web I've stumbled on a significant number of comments that appear to
cover static typing as a proof of concept, but in the process I've
found no tutorial on how to implement it.

Try Cobra instead. Its syntax is Python-like, but it allows static typing.

http://cobra-language.com/
 

Rui Maciel

Ben said:
I think no; static typing is inherently un-Pythonic.

Python provides strong, dynamic typing. Enjoy it!
Bummer.



Is there some specific problem you think needs static typing? Perhaps
you could start a new thread, giving an example where you are having
trouble and you think static typing would help.

It would be nice if some functions threw an error if they were passed a type
they don't support or weren't designed to handle. That would avoid having
to deal with bugs which would otherwise go unnoticed until much later.

To avoid this sort of error, I've been testing arguments passed to some
functions based on their type, and raising TypeError when necessary, but
surely there must be a better, more pythonic way to handle this issue.


Rui Maciel
 

Rui Maciel

Gary said:
The Pythonic way is to *enjoy* the freedom and flexibility and power of
dynamic typing. If you are stepping out of a static typing language
into Python, don't step just half way. Embrace dynamic typing. (Like
any freedom, it can bite you at times, but that's no reason to hobble
Python with static typing.)


What's the Python way of dealing with objects being passed to a function
that aren't of a certain type, don't have specific attributes of a specific
type, and don't support a specific interface?


Rui Maciel
 

Joshua Landau


It's really not.

It would be nice if some functions threw an error if they were passed a type
they don't support or weren't designed to handle. That would avoid having
to deal with bugs which would otherwise go unnoticed until much later.

To avoid this sort of error, I've been testing arguments passed to some
functions based on their type, and raising TypeError when necessary, but
surely there must be a better, more pythonic way to handle this issue.

Unless you have a very good reason, don't do this. It's a damn pain
when functions won't accept my custom types with equivalent
functionality -- Python's a duck-typed language and it should behave
like one.
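
To illustrate the point, a small sketch (first_line and FakeFile are hypothetical names): a function that only needs a read() method accepts any object that provides one, whereas an isinstance() check would reject a perfectly good custom type:

```python
import io

def first_line(f):
    # Duck typing: all we require of f is a read() method that
    # returns a string; we never ask what class it is.
    return f.read().split("\n")[0]

class FakeFile:
    """A custom type with file-equivalent functionality."""
    def read(self):
        return "line one\nline two"

# Both calls work; a check like isinstance(f, io.IOBase)
# would have rejected FakeFile despite it behaving correctly.
print(first_line(io.StringIO("real\nfile")))
print(first_line(FakeFile()))
```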
 

Steven D'Aprano

What's the Python way of dealing with objects being passed to a function
that aren't of a certain type, don't have specific attributes of a specific
type, and don't support a specific interface?

Raise TypeError, or just let the failure occur however it occurs,
depending on how much you care about early failure.

Worst:

if type(obj) is not int:
    raise TypeError("obj must be an int")


Better, because it allows subclasses:

if not isinstance(obj, int):
    raise TypeError("obj must be an int")


Better still:

import numbers
if not isinstance(obj, numbers.Integral):
    raise TypeError("obj must be an integer number")



All of the above is "look before you leap". Under many circumstances, it
is "better to ask for forgiveness than permission" by just catching the
exception:

try:
    flag = obj & 8
except TypeError:
    flag = False


Or just don't do anything:


flag = obj & 8


If obj is the wrong type, you will get a perfectly fine exception at
run-time. Why do extra work to try to detect the failure when Python will
do it for you?
 

Joshua Landau

What's the Python way of dealing with objects being passed to a function
that aren't of a certain type, don't have specific attributes of a specific
type, and don't support a specific interface?

What's the actual problem you're facing? Where do you feel that you
need to verify types?
 

Chris Angelico

It would be nice if some functions threw an error if they were passed a type
they don't support or weren't designed to handle. That would avoid having
to deal with bugs which would otherwise go unnoticed until much later.

To avoid this sort of error, I've been testing arguments passed to some
functions based on their type, and raising TypeError when necessary, but
surely there must be a better, more pythonic way to handle this issue.

def add_three_values(x,y,z):
    return x+y+z

Do you want to test these values for compatibility? Remember, you
could take a mixture of types, as most of the numeric types can safely
be added. You can also add strings, or lists, but you can't mix them.
And look! It already raises TypeError if it's given something
unsuitable:
>>> add_three_values(1,"foo",[4,6])
Traceback (most recent call last):
  File "<pyshell#28>", line 1, in <module>
    add_three_values(1,"foo",[4,6])
  File "<pyshell#25>", line 2, in add_three_values
    return x+y+z
TypeError: unsupported operand type(s) for +: 'int' and 'str'

The Pythonic way is to not care what the objects' types are, but to
simply use them.

In C++ and Java, it's usually assumed that the person writing a
function/class is different from the person writing the code that uses
it, and that each needs to be protected from the other. In Python,
it's assumed that either you're writing both halves yourself, or at
least you're happy to be aware of the implementation on the other
side. It saves a HUGE amount of trouble; for instance, abolishing
private members makes everything easier. This philosophical difference
does take some getting used to, but is so freeing. The worst it can do
is give you a longer traceback when a problem is discovered deep in
the call tree, and if your call stack takes more than a page to
display, that's code smell for another reason. (I've seen Ruby
tracebacks that are like that. I guess Ruby programmers get used to
locating the important part.)

ChrisA
 

Rui Maciel

Joshua said:
Unless you have a very good reason, don't do this. It's a damn pain
when functions won't accept my custom types with equivalent
functionality -- Python's a duck-typed language and it should behave
like one.

In that case what's the pythonic way to deal with standard cases like this
one?

<code>
class SomeModel(object):
    def __init__(self):
        self.label = "this is a label attribute"

    def accept(self, visitor):
        visitor.visit(self)
        print("visited: ", self.label)


class AbstractVisitor(object):
    def visit(self, element):
        pass


class ConcreteVisitorA(AbstractVisitor):
    def visit(self, element):
        element.label = "ConcreteVisitorA operated on this model"


class ConcreteVisitorB(AbstractVisitor):
    def visit(self, element):
        element.label = "ConcreteVisitorB operated on this model"


model = SomeModel()

operatorA = ConcreteVisitorA()
model.accept(operatorA)

operatorB = ConcreteVisitorB()
model.accept(operatorB)

not_a_valid_type = "foo"
model.accept(not_a_valid_type)
</code>


Rui Maciel
 

Rui Maciel

Joshua said:
What's the actual problem you're facing? Where do you feel that you
need to verify types?

A standard case would be when there's a function which is designed expecting
that all operands support a specific interface or contain specific
attributes.

In other words, when passing an unsupported type causes problems.


Rui Maciel
 

Rui Maciel

Chris said:
def add_three_values(x,y,z):
    return x+y+z

Do you want to test these values for compatibility? Remember, you
could take a mixture of types, as most of the numeric types can safely
be added. You can also add strings, or lists, but you can't mix them.
And look! It already raises TypeError if it's given something
unsuitable:

If the type problems aren't caught right away when the invalid types are
passed to a function, then the problem may only manifest itself at some
far-away point in the code, making the bug needlessly harder to spot and
fix, and the whole ordeal needlessly time-consuming.


Rui Maciel
 

Chris Angelico

If the type problems aren't caught right away when the invalid types are
passed to a function, then the problem may only manifest itself at some
far-away point in the code, making the bug needlessly harder to spot and
fix, and the whole ordeal needlessly time-consuming.

There are two problems that can result from not checking:

1) The traceback will be deeper and may be less clear.

2) Some code will be executed and then an exception thrown.

If #2 is a problem, then you write checks in. (But be aware that
exceptions can be thrown from all sorts of places. It's usually better
to write your code to cope with exceptions than to write it to check
its args.) But that's a highly unusual case. With >99% of Python
scripts, it won't matter; programming errors are corrected by editing
the source and rerunning the program from the top. (There ARE
exceptions to this, but in Python they're relatively rare. In some of
my serverside programming (usually in Pike), I have to code in *much*
better protection than this. But if you're doing that sort of thing,
you'll know.)

So the real problem here is that, when there's a bug, the traceback is
longer and perhaps unclear. This is at times a problem, but it's not
as big a problem as the maintenance burden of all those extra type
checks. You might have a bug that takes you an extra few minutes to
diagnose because it's actually caused half way up the call stack (or,
worse, it doesn't come up in testing at all and it slips through into
production), but you save hours and hours of fiddling with the type
checks, and perhaps outright fighting them when you want to do
something more unusual. Or you write six versions of a function, with
different type checking. Any of these scenarios is, in my opinion, far
worse than the occasional bit of extra debugging work.

Like everything, it's a tradeoff. And if your function signatures are
sufficiently simple, you won't often get the args wrong anyway.

ChrisA
 

Burak Arslan

A standard case would be when there's a function which is designed expecting
that all operands support a specific interface or contain specific
attributes.

In other words, when passing an unsupported type causes problems.

Hi,

First, let's get over the fact that, with dynamic typing, code fails at
runtime. Irrespective of language, you just shouldn't ship untested
code, so I say that's not an argument against dynamic typing.

This behaviour is only a problem when code fails *too late* into the
runtime -- i.e. when you don't see the offending value in the stack trace.

For example, consider you append values to a list and the values in that
list get processed somewhere else. If your code fails because of an
invalid value, your stack trace is useless, because that value should
not be there in the first place. The code should fail when appending to
that list and not when processing it.

The "too late" case is a bit tough to illustrate. This could be a rough
example: https://gist.github.com/plq/6163839 Imagine that the list there
is progressively constructed somewhere else in the code and later
processed by the sq_all function. As you can see, the stack trace is
pretty useless as we don't see how that value got there.

In such cases, you do need manual type checking.
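
One hedged sketch of that idea (CheckedList is an invented name, not a standard class): validate values at the point of insertion, so the traceback points at the producer of the bad value rather than its eventual consumer:

```python
class CheckedList(list):
    """A list that rejects non-numeric values on append, so errors
    surface where the bad value enters the program, not where it is
    later processed. (A fuller version would also guard extend(),
    insert(), etc.; this is only a sketch.)"""
    def append(self, value):
        if not isinstance(value, (int, float)):
            raise TypeError("expected a number, got %r" % (value,))
        super().append(value)

values = CheckedList()
values.append(3)
try:
    values.append("oops")       # fails here, at the producer...
except TypeError as e:
    print(e)

squares = [v * v for v in values]   # ...not here, at the consumer
print(squares)
```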

Yet, as someone else noted, naively using isinstance() for type checking
breaks duck typing. So you should read up on abstract base classes:
http://docs.python.org/2/glossary.html#term-abstract-base-class
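
As a brief sketch of why ABCs play well with duck typing: collections.abc.Sized, for example, reports any class that defines __len__ as an instance, with no inheritance or registration required:

```python
from collections.abc import Sized

class Box:
    # Box does not inherit from Sized; defining __len__ is enough,
    # because Sized's __subclasshook__ checks for the method itself.
    def __len__(self):
        return 1

print(isinstance(Box(), Sized))   # True: duck typing preserved
print(isinstance(42, Sized))      # False: ints have no __len__
```

So an isinstance() check against a suitable ABC need not break custom types the way a check against a concrete class does.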

That said, I've been writing Python for several years now, and I've
needed to resort to this technique only once (I was working on a
compiler). Most of the time, you'll be just fine without any manual type
checking.

Best regards,
Burak
 

Antoon Pardon

On 06-08-13 15:27, Burak Arslan wrote:
Hi,

First, let's get over the fact that, with dynamic typing, code fails at
runtime. Irrespective of language, you just shouldn't ship untested
code, so I say that's not an argument against dynamic typing.

Why not? Can ease of development not be a consideration? So if some
kind of faults are easier to detect at compile time if you have static
typing than if you have to design a test for them, I don't see why that
can't be an argument.
 

Eric S. Johansson

First, let's get over the fact that, with dynamic typing, code fails at
runtime. Irrespective of language, you just shouldn't ship untested
code, so I say that's not an argument against dynamic typing.

It's not so much shipping untested code as being unable to test all
the pathways in the code shipped. I ran into this problem with a
server I built. I ended up solving the problem by building a testing
scaffolding that let me control all inputs. It would've been much easier
with static typing to make sure all the pieces lined up.

The other technique I've used is a properly set up exception handling
environment. Do it right and you can log all of the errors so that you
have useful information. Part of "doing it right" includes a system that
tells you when exceptions happened right away so the server doesn't run
for days or more failing at random but nobody notices because your
exceptions keep the system from failing completely.

I guess this is a long way of saying instrument your software so that it
can be tested and or give you enough information about the internal state.
This is sort of like building a specialized integrated circuit. You need
to design it so it can be tested/observed after it's been embedded in
epoxy and not just count on being able to probe the wafer in the lab.
 

Chris Angelico

On 06-08-13 15:27, Burak Arslan wrote:

Why not? Can ease of development not be a consideration? So if some
kind of faults are easier to detect at compile time if you have static
typing than if you have to design a test for them, I don't see why that
can't be an argument.

Sure, which is why I like working in Pike, which does have static type
declarations (when you want them; they can get out of the way when you
don't). But there will always be, regardless of your language,
criteria that static typing cannot adequately handle, so just write
your code to cope with exceptions - much easier. If the exception's
never thrown, the bug can't be all that serious; otherwise, just deal
with it when you find it, whether that be in initial testing or years
later in production. There WILL BE such errors - that's a given. Deal
with them, rather than trying to eliminate them.

ChrisA
 

Rotwang

Joshua said:
Unless you have a very good reason, don't do this [i.e. checking
arguments for type at runtime and raising TypeError]. It's a damn pain
when functions won't accept my custom types with equivalent
functionality -- Python's a duck-typed language and it should behave
like one.

In that case what's the pythonic way to deal with standard cases like this
one?

<code>
class SomeModel(object):
    def __init__(self):
        self.label = "this is a label attribute"

    def accept(self, visitor):
        visitor.visit(self)
        print("visited: ", self.label)


class AbstractVisitor(object):
    def visit(self, element):
        pass


class ConcreteVisitorA(AbstractVisitor):
    def visit(self, element):
        element.label = "ConcreteVisitorA operated on this model"


class ConcreteVisitorB(AbstractVisitor):
    def visit(self, element):
        element.label = "ConcreteVisitorB operated on this model"


model = SomeModel()

operatorA = ConcreteVisitorA()
model.accept(operatorA)

operatorB = ConcreteVisitorB()
model.accept(operatorB)

not_a_valid_type = "foo"
model.accept(not_a_valid_type)
</code>

The Pythonic way to deal with it is exactly how you deal with it above.
When the script attempts to call model.accept(not_a_valid_type) an
exception is raised, and the exception's traceback will tell you exactly
what the problem was (namely that not_a_valid_type does not have a
method called "visit"). In what way would runtime type-checking be any
better than this? There's an obvious way in which it would be worse,
namely that it would prevent the user from passing a custom object to
SomeModel.accept() that has a visit() method but is not one of the types
for which you thought to check.
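
To make that concrete, a trimmed-down version of the example above (LoggingVisitor is a made-up class that never inherits from AbstractVisitor, yet works fine):

```python
class SomeModel(object):
    def __init__(self):
        self.label = "this is a label attribute"

    def accept(self, visitor):
        # No type check: anything with a visit() method is welcome.
        visitor.visit(self)

class LoggingVisitor:
    """Not part of any visitor hierarchy; it merely provides the
    visit() method that accept() actually needs."""
    def visit(self, element):
        element.label = "LoggingVisitor operated on this model"

model = SomeModel()
model.accept(LoggingVisitor())   # duck typing: works without inheritance
print(model.label)

try:
    model.accept("foo")          # str has no visit() method
except AttributeError as e:
    print(e)                     # clear, immediate failure in the traceback
```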
 
