Chris Torek
Exceptions are great, but...
Sometimes when calling a function, you want to catch some or
even all the various exceptions it could raise. What exceptions
*are* those?
It can be pretty obvious. For instance, the os.* modules raise
OSError on errors. The examples here are slightly silly until
I reach the "real" code at the bottom, but perhaps one will get
the point:
...
[I'm not sure why the interpreter wants more after my comment here.]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OSError: [Errno 3] No such process
So now I am ready to write my "is process <pid> running" function:
import os, errno

def is_running(pid):
    "Return True if the given pid is running, False if not."
    try:
        os.kill(pid, 0)
    except OSError, err:
        # We get an EPERM error if the pid is running
        # but we are not allowed to signal it (even with
        # signal 0).  If we get any other error we'll assume
        # it's not running.
        if err.errno != errno.EPERM:
            return False
    return True
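A quick interactive check of the ordinary cases (assuming a Unix-like
system: the current process can always signal itself, and pid 1 exists
but usually belongs to root, so it exercises the EPERM path):

>>> is_running(os.getpid())    # we can always signal ourselves
True
>>> is_running(1)              # init exists; EPERM unless we are root
True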
This function works great, and never raises an exception itself.
Or does it?
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in is_running
OverflowError: long int too large to convert to int
Oops! It turns out that os.kill() can raise OverflowError (at
least in this version of Python, not sure what Python 3.x does).
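To reproduce it, any pid too large for the C integer type that
os.kill() converts its argument to seems to do; I have not pinned
down the exact cutoff, but for instance:

>>> is_running(2 ** 64)        # will not fit in any C integer type
Traceback (most recent call last):
  ...
OverflowError: long int too large to convert to int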
Now, I could add, to is_running, the clause:
    except OverflowError:
        return False
(which is what I did in the real code). But how can I know a priori
that os.kill() could raise OverflowError in the first place? This
is not documented, as far as I can tell. One might guess that
os.kill() would raise TypeError for things that are not integers
(this is the case) but presumably we do NOT want to catch that
here. For the same reason, I certainly do not want to put in a
full-blown:
    except Exception:
        return False
It would be better just to note somewhere that OverflowError is
one of the errors that os.kill() "normally" produces (and then,
presumably, document just when this happens, although even a bare
note that it can happen would let one make an educated guess).
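For completeness, the revised function with both except clauses ends
up looking something like this (a sketch of the "real" code mentioned
above, in the same Python 2 syntax):

import os, errno

def is_running(pid):
    "Return True if the given pid is running, False if not."
    try:
        os.kill(pid, 0)
    except OverflowError:
        # The pid does not even fit in the C integer type that
        # os.kill() uses, so no such process can exist.
        return False
    except OSError, err:
        # EPERM means the pid exists but we may not signal it;
        # any other error means it is not running.
        if err.errno != errno.EPERM:
            return False
    return True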
Functions have a number of special "__" attributes. I think it
might be reasonable to have all of the built-in functions, at least,
have one more, perhaps spelled __exceptions__, that gives you a
tuple of all the exceptions that the function might raise.
Imagine, then:
>>> os.kill.__doc__
'kill(pid, sig)\n\nKill a process with a signal.'
[this part exists]
>>> os.kill.__exceptions__
(<type 'exceptions.OSError'>, <type 'exceptions.TypeError'>,
 <type 'exceptions.OverflowError'>, <type 'exceptions.DeprecationWarning'>)
[this is my new proposed part]
With something like this, a pylint-like tool could compute the
transitive closure of all the exceptions that could occur in any
function, by using __exceptions__ (if provided) or recursively
finding exceptions for all functions called, and doing a set-union.
You could then ask which exceptions can occur at any particular
call site, and see if you have handled them, or at least, all the
ones you intend to handle. (The DeprecationWarning occurs if you
pass a float to os.kill() -- which I would not want to catch.
Presumably the pylint-like tool, which might very well *be* pylint,
would have a comment directive you would put in saying "I am
deliberately allowing these exceptions to pass on to my caller",
for the case where you are asking it to tell you which exceptions
you may have forgotten to catch.)
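To make the set-union idea concrete, here is a rough sketch of what
the tool might compute, assuming it already knows which functions
each function calls (the call_graph mapping below stands in for that
static analysis, and __exceptions__ is of course still hypothetical):

def possible_exceptions(func, call_graph, _seen=None):
    "Union of __exceptions__ over func and everything it transitively calls."
    if _seen is None:
        _seen = set()
    if func in _seen:               # don't loop on recursive call graphs
        return set()
    _seen.add(func)
    result = set(getattr(func, '__exceptions__', ()))
    for callee in call_graph.get(func, ()):
        result |= possible_exceptions(callee, call_graph, _seen)
    return result

A call-site check then reduces to asking whether this set, minus
whatever the surrounding except clauses already name, is empty (or
contains only exceptions you have said you are deliberately letting
propagate).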
User functions could set __exceptions__ for documentation purposes
and/or speeding up this pylint-like tool. (Obviously, user-provided
functions might raise exception classes that are only defined in
user-provided code -- but to raise them, those functions have to
include whatever code defines them, so I think this all just works.)
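For user code, setting the attribute could be as cheap as a small
decorator; this is purely hypothetical, since nothing consumes
__exceptions__ today, but it shows the shape I have in mind:

import os, signal

def raises(*exc_types):
    "Record, on the function object itself, the exceptions it may raise."
    def decorate(func):
        func.__exceptions__ = exc_types
        return func
    return decorate

@raises(OSError, OverflowError, TypeError)
def send_term(pid):
    "Ask the given process to exit (a made-up example function)."
    os.kill(pid, signal.SIGTERM)

The pylint-like tool (or a curious reader) could then pick up
send_term.__exceptions__ just as it would for a C-coded function.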
The key thing needed to make this work, though, is the base cases
for system-provided code written in C, which pylint by definition
cannot inspect to find a set of exceptions that might be raised.