Everyone seems to be suggesting that the fix for the problem
is to cobble together some way of forcing an unsigned integer
into a signed integer (what you would do with a cast in C).
However, if I understand the long<->int consolidation
correctly, this is not consistent with that effort.
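(For the record, those workarounds amount to something like the
sketch below; the helper name is mine. It reinterprets the bits
rather than converting the value, which is exactly the cast-like
behaviour I mean.)

    import struct

    def to_signed_int(value):
        # Reinterpret the low 32 bits of 'value' as a signed C int,
        # the moral equivalent of a C cast.
        return struct.unpack('=i', struct.pack('=I', value & 0xFFFFFFFF))[0]

    # to_signed_int() hands back a negative int with the same bit
    # pattern, which a signed-int-only API will then accept.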
As far as I can tell, the underlying problem is that
fcntl.ioctl (a thin wrapper over the C ioctl() routine) is
expecting a signed integer. Well, that's what the man page says.
In practice it's just expecting an int-sized chunk of bits: it
wants a unique bit pattern to feed to a 'case' statement rather
than an "integer" in the number-line, arithmetic operations
sense of the word. C's implicit coercion rules make it a moot
point, but Python's coercion rules have been changed to be
incompatible with C's. Hilarity ensues.
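To make the mismatch concrete, here's a small script (the request
constant is made up; anything with the high bit set behaves the
same). On the interpreter versions being discussed here, the
unsigned form of the constant never even reaches the kernel:

    import fcntl, os

    REQUEST = 0xC0045878   # hypothetical ioctl request with the high bit set

    fd = os.open(os.devnull, os.O_RDONLY)
    try:
        fcntl.ioctl(fd, REQUEST, 0)
    except OverflowError:
        # Raised while parsing the arguments, before any syscall is made:
        # the value doesn't fit in a signed C int.
        print("ioctl() refused the unsigned form of the request")
    except (IOError, OSError):
        # On builds that do accept the full unsigned range, the call gets
        # as far as the kernel, which just reports an inappropriate ioctl
        # for /dev/null.
        print("the bit pattern made it through to the kernel")
    finally:
        os.close(fd)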
These are the kinds of problems that need to be fixed. The
function should be asking for an unsigned integer. This is
possible with the C API at least since Python 2.3. Without
these fixes, the long<->int consolidation is going to continue
to produce frustration. There are many functions in the
standard library that you would expect to take unsigned
integers but actually specify signed integers.
Unfortunately the C API is cast in stone (at least in
comparison to Python standards). I guess somebody could go
through the CPython code and "lie" to Python about it in order
to fix some of the issues.
What I would really, really like are fixed-length integer types
so that I can manipulate 8, 16, 32 and maybe 64 bit, 2's
complement values. I've seen some pretty good "user-space"
pure-Python implementations, but haven't gotten around to using
them in production yet.
One of the nasty bits in a pure-Python approach is that there's
no way to write a literal with a fixed length. For example,
instead of writing 0xf7 to get an 8-bit value and 0x12345678 to
get a 32-bit value, you have to instantiate a class like
Word8(0xf7) and Word32(0x12345678).
That starts to make things pretty hard to read.
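To show what I mean, here's a toy sketch of the user-space
approach (my own, not any of the implementations I mentioned; the
Word8/Word32 names just mirror the ones above). Every operation
masks the result back down to the declared width, so arithmetic
wraps the way 2's complement hardware does:

    class Word(object):
        """Toy fixed-width, 2's-complement integer (sketch only)."""

        def __init__(self, value, bits):
            self.bits = bits
            self.mask = (1 << bits) - 1
            self.value = value & self.mask   # everything wraps to 'bits' wide

        def __add__(self, other):
            # Overflow wraps around, like fixed-width hardware arithmetic.
            return Word(self.value + int(other), self.bits)

        def signed(self):
            # Interpret the stored bit pattern as a signed value.
            if self.value & (1 << (self.bits - 1)):
                return self.value - (1 << self.bits)
            return self.value

        def __int__(self):
            return self.value

        def __repr__(self):
            return "Word%d(0x%0*X)" % (self.bits, self.bits // 4, self.value)

    def Word8(value):
        return Word(value, 8)

    def Word32(value):
        return Word(value, 32)

    # Word8(0xf7) + 0x10 wraps to Word8(0x07); Word32(0x12345678) keeps
    # all 32 bits. The literals themselves, though, are still plain
    # Python ints, which is exactly the readability problem.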