Does Python really follow its philosophy of "Readability counts"?


Russ P.

But you keep failing to explain why you need it to be _part of the standard_
library (or whatever).

Technically, it doesn't need to be. But if someone proposes using a
particular language for a major safety-critical project, the critical
features realistically need to be part of the standard language or at
the very least in the standard library.

The requirements are different for government-regulated code than they
are for non-critical commercial code, as well they should be. Imagine
trying to explain to the FAA that you are going to use a language that
is inappropriate by itself for a safety-critical system but will be
appropriate with the addition of third-party software. That just won't
fly.

Then again, the FAA might not approve Python for flight-critical or
safety-critical code even if it gets enforced data hiding, so perhaps
the point is moot.
If you need it in your project, _use_ it. If you don't, then don't use it. If
_you_ need that thing you call security, just use it already and quit
complaining that we don't use it. Is there a policy in your project that you
can't use anything external?

I don't recall complaining that anyone doesn't use something. In fact,
in the unlikely event that enforced data hiding is ever added to
Python, nobody would be forced to use it (except perhaps by your boss
or your customer, but that's another matter).
 

Luis Zarrabeitia

Technically, it doesn't need to be. But if someone proposes using a
particular language for a major safety-critical project, the critical
features realistically need to be part of the standard language or at
the very least in the standard library.

I assume, then, that no safety-critical project uses any external tool for
checking anything important.
The requirements are different for government-regulated code than they
are for non-critical commercial code, as well they should be. Imagine
trying to explain to the FAA that you are going to use a language that
is inappropriate by itself for a safety-critical system but will be
appropriate with the addition of third-party software. That just won't
fly.

Then it won't fly, period.
If you start explaining that the language is inappropriate, then you've
already made the case. I would argue that the language _is_ appropriate
_because_ all your concerns can be solved (assuming, of course, that the
theoretically-solvable concerns can actually be solved).

But as you haven't yet stated any specific concern other than the silly
locked-doors analogy and "you are crazy if you think that a nuclear blahblah
doesn't use private variables", this is kind of pointless.
Then again, the FAA might not approve Python for flight-critical or
safety-critical code even if it gets enforced data hiding, so perhaps
the point is moot.

Most likely. From what you've said, if the use of an external tool would make
it inappropriate, I highly doubt that they'd like an informally developed,
community-driven open source language whose developers and a good portion of
its community consider the idea of using enforced data hiding for anything
other than debugging to be silly.
I don't recall complaining that anyone doesn't use something.

Well, you want it in the Python compiler I use. That seems awfully close.

Funny thing... if pylint became part of the standard library, I might welcome
the change. I certainly wouldn't be complaining about it unless it was
enabled-by-default-and-can't-disable-it.
In fact,
in the unlikely event that enforced data hiding is ever added to
Python, nobody would be forced to use it (except perhaps by your
boss or your customer, but that's another matter).

No one is now. And no one is forced to not use it either (except perhaps by
your boss or your customer, but that's another matter).
 

Mark Wooding

Scott David Daniels said:
Nowhere in this discussion is a point that I find telling: Python's
policy of accessibility to the full data structure allows simple
implementation of debugging software, rather than the black arcana
that is the normal fare of trying to weld debuggers into the compilers.

That's a very good point, actually. It also means that, even without a
formal `debugger', you can easily write diagnostic code which dumps the
internal state of some interesting object -- say at the interactive
prompt, or in a hacky test program.
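
For instance, a minimal sketch of the kind of throwaway diagnostic meant
here (the Widget class is just an illustrative stand-in for "some
interesting object"):

class Widget(object):
    def __init__(self):
        self._cache = {}
        self.__counter = 0          # stored as _Widget__counter

def dump_state(obj):
    # Print every instance attribute, including the 'private' ones.
    for name, value in sorted(vars(obj).items()):
        print("%s = %r" % (name, value))

dump_state(Widget())
# _Widget__counter = 0
# _cache = {}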

One might argue that `industrial strength' modules ought to have such
diagnostic abilities built in; but they often either don't tell you what
you actually wanted to know in your current situation, or tell you way
more than was necessary, or both. Either that, or they're just too
complicated to use.

Thank you for that observation!

-- [mdw]
 

Mark Wooding

Luis Zarrabeitia said:
Btw, the correctness of a program (on a turing-complete language)
cannot be statically proven. Ask Turing about it.

Be careful! Given a putative correctness-checking algorithm, there
exist programs for which the algorithm gives the wrong answer. That
doesn't necessarily mean that there isn't a useful subset of `all
programs' which can be proven correct, or even that this subset doesn't
include all `interesting' programs. Even so, actually constructing
algorithms which prove interesting things about all interesting programs
seems difficult.

Some people (let's call them `type A programmers') have decided that
they want to be assisted with writing correct programs -- to the extent
that they've chosen some correctness properties, and use a tool which
rejects programs that it can't prove have those properties. Since the
tool can't work for all programs, it errs on the side of caution,
sometimes rejecting correct programs. (The properties tend to be called
`type correctness' and the tool is built into the compiler, but that's
not actually very important.)

Other people (`type B programmers') don't like having their (apparently?
possibly?) correct programs rejected. Instead, they'd rather risk
writing incorrect programs (maybe they try to minimize the risk by
thinking very hard, or by building thorough test suites) because they
find that some of the kinds of programs the tools reject are actually
interesting and useful -- or at least fun.

I think trying to persuade a type A programmer that he wants to work
like a type B programmer, or /vice versa/, is difficult, bordering on
futile. Type A stereotypes type B as a bunch of ill-disciplined
reckless hackers; type B stereotypes type A as killjoy disciplinarians.
Meeting in the middle is difficult. (`We just want to add a little
safety.' `You want to take away our freedom!' Etc., /ad nauseam/.)

On a personal note, I've written programs in lots (/lots/) of different
languages: C, Pascal, Haskell, Standard ML, Python, Perl, Lisp, Scheme,
and assembler for various processors. I always found programming in
permissive languages more enjoyable. I still love ARM assembler (though
I thought the 32-bit address space changes spoilt some of its beauty),
but I don't get to write much these days; Common Lisp is now my language
of choice, but Python comes very close. I find C too fiddly and
annoying nowadays, and its type system does an impressive job of
simultaneously being uncomfortably constraining while being too weak to
provide a satisfactory feeling of confidence in compensation. Kernighan
summed up Pascal perfectly when he said `There is no escape.' Haskell
is interesting: it can provide a surprising degree of freedom, but it
makes you work /very/ hard wrangling its type system in order to get
there; and again, I found I had most fun when I was doing extremely evil
unsafePerformIO hacking...

So, my personal plea. Writing Python is /fun/. Please let it stay that
way.

-- [mdw]
 

Paul Rubin

Bruno Desthuilliers said:
"dynamic" and "static" were not meant to concern typing here (or at
least not only typing).

I'm not sure what you mean by those terms then.
Haskell and MLs are indeed statically typed, but with a powerful type
inference system, which gives great support for genericity
<ot>(hmmm... is that the appropriate word ?)</ot>

I think you mean "polymorphism"; genericity in functional programming
usually means compile time reflection about types. (It means
something different in Java or Ada).
Now these are functional languages, so the notion of "access
restriction" is just moot in this context !-)

I'm not sure what you mean by that; Haskell certainly supports access
restrictions, through its type and module systems.
Ok, I should probably have made clear I was thinking of a hi-level
dynamic _imperative_ language vs a low-level static _imperative_
language. FP is quite another world.

I'd say that Python's FP characteristics are an important part of its
expressiveness.
 

Paul Rubin

Luis Zarrabeitia said:
Even better. Realize that you are trying to use a tool made for
debugging, documenting, and catching some obvious errors
(private/public) for "security". Go fix Bastion, that would be your
best bet. It was yanked out of python because it was insecure, but
if someone were to fix it, I'm sure the changes would be welcomed.

Bastion appears to be fundamentally unfixable, at least in CPython.
It might be possible to revive it in PyPy.
Btw, when I was programming in .Net (long time ago), I found a
little library that I wanted to use. The catch: it wasn't
opensource,

Well, THAT's the problem with it. The issues that flowed
from that problem are simply consequences.
But I don't think it is _fully_ irrelevant. See, a thread that began by
saying that a piece of code was hard to read has evolved into changing the
language so we could have tools for statically proving "certain things" about
the code.

Yes, being able to tell without studying 1000's of lines of code what
the code does is a readability issue. If a function says at the top
"this function returns an integer" and that assertion is verified by
the compiler (or an external tool), you now know something about the
function's return value by reading only one line. If the assertion is
not verified by a program, then you have to actually examine all the
code in the function to check that particular fact.
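
As a minimal sketch of that one-line claim, using optional annotations plus
an external checker such as mypy (an assumption of more modern tooling than
this thread had available, but the idea is the same):

def count_words(text: str) -> int:
    # The "-> int" claim can be checked mechanically by an external tool
    # (e.g. mypy) without reading or running the function body.
    return len(text.split())

# A checker can reject a call like count_words(42) before the program ever
# runs; plain Python would only fail at run time, somewhere inside the body.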

Of course, the very presence of such assertions and the implementation
methods necessary to make them verifiable can in some situations
complicate the code, making it less readable in other regards.
There's no magic bullet; it's always a trade-off.
And each time a "suggestion" appears (from people that, at least at
Have pylint check if someone uses getattr in your code, at all. If pylint
doesn't do it, just grep for it and don't run that code. Simple.

It would be nice to be able to use getattr on instances of class X
while being able to verify that it is not used on instances of class Y.
That's somewhat beyond the reach of Pylint at the moment, I think.
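
A rough sketch of the "just grep for it" idea, using the ast module so that
only real calls to the getattr builtin are flagged (an illustration, not an
existing pylint check):

import ast

source = '''
x = getattr(obj, name)        # flagged
y = "getattr in a string"     # not flagged
'''

class GetattrFinder(ast.NodeVisitor):
    def visit_Call(self, node):
        if isinstance(node.func, ast.Name) and node.func.id == 'getattr':
            print("getattr call at line %d" % node.lineno)
        self.generic_visit(node)

GetattrFinder().visit(ast.parse(source))

Telling instances of class X apart from instances of class Y, as above, is
of course exactly the part such a check can't do.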
 

Paul Rubin

Scott David Daniels said:
Nowhere in this discussion is a point that I find telling: Python's
policy of accessibility to the full data structure allows simple
implementation of debugging software, rather than the black arcana
that is the normal fare of trying to weld debuggers into the compilers.

Are you really saying that navigating through Python traceback and
frame objects is not equally black arcana? Is there any hope of
debuggers for CPython programs that use those interfaces working in
Jython or PyPy?
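
For the record, here is roughly what that navigation looks like in CPython:
a few lines that walk to the innermost frame after an exception and dump its
locals (whether this counts as black arcana is exactly the question).

import sys

def buggy(n):
    scale = 10
    return scale / n

try:
    buggy(0)
except ZeroDivisionError:
    tb = sys.exc_info()[2]
    while tb.tb_next is not None:     # walk to the innermost frame
        tb = tb.tb_next
    frame = tb.tb_frame
    print("error in %s() at line %d" % (frame.f_code.co_name, tb.tb_lineno))
    for name, value in frame.f_locals.items():
        print("  %s = %r" % (name, value))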

Java, at least, has a well defined and documented debugging interface
that allows access to private and protected instance variables for
debugging purposes. You can enable or disable that interface by
setting a runtime option when you start the JVM.
 

Paul Rubin

Mark Wooding said:
Some people (let's call them `type A programmers') have decided that
they want to be assisted with writing correct programs...
Other people (`type B programmers') don't like having their (apparently?
possibly?) correct programs rejected....
I think trying to persuade a type A programmer that he wants to work
like a type B programmer, or /vice versa/, is difficult, bordering on
futile. Type A stereotypes type B as a bunch of ill-disciplined
reckless hackers; type B stereotypes type A as killjoy disciplinarians.
Meeting in the middle is difficult. (`We just want to add a little
safety.' `You want to take away our freedom!' Etc., /ad nauseam/.)

That's an interesting analysis. You know, I think I'm really a type B
programmer, interested in type A techniques and tools for the same
reason someone who naturally sleeps late is interested in extra-loud
alarm clocks.

Also, the application area matters. There is a difference between
programming for one's own enjoyment or to do a personal task, and
writing programs whose reliability or lack of it can affect other
people's lives. I've never done any safety-critical programming but I
do a fair amount of security-oriented Internet programming. As such,
I have to always assume that my programs will be attacked by people
who are smarter than I am and know more than I do. I can't possibly
out-think them. If I don't see problems in a program, it's still
plausible that someone smarter than me will spot something I missed.
Therefore, my failure to detect the presence of problems is not
reassuring. What I want is means of verifying the absence of
problems.

Finally, your type-A / type-B comparison works best regarding programs
written by one programmer or by a few programmers who communicate
closely. I'm working on a Python program in conjunction with a bunch
of people in widely dispersed time zones, so communication isn't so
fluid, and when something changes it's not always easy to notice the
change or understand the reason and deal with it. There have been
quite a few times when some hassle would have been avoided by the
static interfaces mandated in less dynamic languages. Whether the
hassle saved would have been outweighed by the extra verbosity is not
known. Yeah, I know, more docs and tests can always help, but why not
let the computer do more of the work?
Haskell is interesting: it can provide a surprising degree of
freedom, but it makes you work /very/ hard wrangling its type system
in order to get there; and again, I found I had most fun when I was
doing extremely evil unsafePerformIO hacking...

I've found Haskell's type system to work pretty well for the
not-so-fancy things I've tried so far. It takes some study to
understand, but it's very uniform and beautiful. I'm having more
trouble controlling resource consumption of programs that are
otherwise semantically correct, a well known drawback of lazy
evaluation. The purpose of unsafePerformIO is interfacing with C
programs and importing them into Haskell as pure functions when
appropriate. Anyway, at least for me, Haskell is fascinating as an
object of study, and a lot of fun to hack with, but doesn't yet have
the creature comforts or practicality of Python, plus its steep
learning curve makes it unsuitable for projects not being developed by
hardcore nerds.

The ML family avoids some of Haskell's problems, but is generally less
advanced and moribund. Pretty soon I think we will start seeing
practical successor languages that put the best ideas together.
 

Steven D'Aprano

Russ P. wrote:
(snip)

And quite a few people - most of them using Python daily - answered that they
didn't want it.

Then they don't have to use it.

Lots of people think that double-underscore name mangling is a waste of
time: not strict enough to be useful, not open enough to be Pythonic.
Solution? Don't use double-underscore names.
 

Steven D'Aprano

Sorry, I didn't see the last part originally. I don't think 'outside of
class Parrot' is well-defined in Python. Does '_private' have to be a
member of 'Parrot', an instance of 'Parrot', or the calling instance of
'Parrot', before entering the calling scope, or at the time the call is
made? Since many of these can change on the fly, there's more than one
consistent interpretation to the syntax.


This is a good point. Any hypothetical move to make Python (or a Python-
like language) stricter about private/protected attributes would need to
deal with that question. I don't have to worry about that until somebody
writes a PEP :)
 

Steven D'Aprano

Btw, the correctness of a program (on a turing-complete language) cannot
be statically proven. Ask Turing about it.

The correctness of *all* *arbitrary* programs cannot be proven. That
doesn't mean that no programs can be proven.
 

Paul Rubin

Tim Rowe said:
Programs done in Ada are, by objective measures, more reliable than
those done in C and C++ (the very best released C++ programs are about
as good as the worst released Ada programs), although I've always
wondered how much of that is because of language differences and how
much is because Ada tends to be used on critical projects that also
tend to get a lot more attention to development standards.

A reliability comparison between C++ and Java might shed light on that.
 

Bruno Desthuilliers

Steven D'Aprano wrote:
Then they don't have to use it.

Yes they would. Because this would become the official way to tell
what's interface and what's implementation, and *this* is the important
point.
Lots of people think that double-underscore name mangling is a waste of
time: not strict enough to be useful, not open enough to be Pythonic.
Solution? Don't use double-underscore names.

The name-mangling mechanism is actually useful when you want to make
sure some vital implementation attribute (usually of a class intended to
be extended by the library users) won't be *accidentally* overwritten.
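
A small sketch of that intended use (class names are illustrative): the base
class's mangled attribute survives even when a subclass reuses the name.

class Base(object):
    __registry = []                    # mangled to _Base__registry
    def register(self, item):
        self.__registry.append(item)

class UserSubclass(Base):
    __registry = "oops"                # mangled to _UserSubclass__registry

b = UserSubclass()
b.register("item")
print(Base._Base__registry)            # ['item'] -- the base's list is untouched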
 

Bruno Desthuilliers

Paul Rubin wrote:
I'm not sure what you mean by those terms then.

Python (and some other dynamic OOPLs) allows you to dynamically add /
remove / replace attributes (including methods), either on a
per-instance or per-class (at least for class-based ones) basis.
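
A quick sketch of what that means in practice (names are illustrative):

import types

class Greeter(object):
    def hello(self):
        return "hello"

g = Greeter()

Greeter.hello = lambda self: "bonjour"             # replace a method on the class
print(g.hello())                                   # 'bonjour' -- affects every instance

g.hello = types.MethodType(lambda self: "hi", g)   # shadow it on one instance only
print(g.hello())                                   # 'hi'

del g.hello                                        # remove the per-instance version
print(g.hello())                                   # back to 'bonjour'
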
I think you mean "polymorphism";
Yeps.

genericity in functional programming
usually means compile time reflection about types. (It means
something different in Java or Ada).


I'm not sure what you mean by that; Haskell certainly supports access
restrictions, through its type and module systems.

Same word, somewhat different context. What I meant was that since
functional languages are (supposedly) stateless, there's no state to
make "private".

But you're right to correct me wrt/ existing access restrictions in Haskell.
I'd say that Python's FP characteristics are an important part of its
expressiveness.

Indeed - but they do not make Python a functional language[1]. Python is
based on objects, not on functions, and quite a lot of the support for
functional programming in Python comes from the object system. Just look
how functools.partial is implemented. Yes, it could have been
implemented with a HOF and closures (and there have been such
implementations), but using a class is still the most pythonic way here.

[1] except for a very formal definition of "functional language".
 

Tim Rowe

Btw, the correctness of a program (on a turing-complete language) cannot be
statically proven. Ask Turing about it.

For the most safety-critical of programmes, for which static proof is
required, restrictions are placed on the use of the language that
effectively mean that it is not Turing-complete. Specifically, all
loops that are required to terminate require a loop variant to be
defined. Typically the loop variant is a finite non-negative integer
that provably decreases on every pass of the loop, which makes halting
decidable.
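
A sketch of the loop-variant idea in Python (the assertions stand in for
what a verification tool would prove statically; the names are illustrative):

def gcd(a, b):
    assert a > 0 and b > 0
    while b != 0:
        variant_before = b               # the variant: a non-negative integer
        a, b = b, a % b
        assert 0 <= b < variant_before   # it decreases on every pass, so the loop halts
    return a

print(gcd(252, 105))                     # 21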
 

Steven D'Aprano

Steven D'Aprano wrote:

Yes they would. Because this would become the official way to tell
what's interface and what's implementation, and *this* is the important
point.

But if you have free access to attributes, then *everything* is interface.

The name-mangling mechanism is actually useful when you want to make
sure some vital implementation attribute (usually of a class intented to
be extended by the library users) won't be *accidentally* overwritten.

Except it doesn't. Take this simple module:

# module.py

class C(object):
    __n = 3
    def spam(self):
        return "spam " * self.__n

class D(C):
    pass

# end module.py


I have no interest in C; I may have no idea it even exists. It might be
buried deep inside the inheritance hierarchy of the class I really want,
D. So now I subclass D:
>>> from module import D
>>> class C(D):
...     __n = 5
...     def ham(self):
...         return "I eat ham %d times a day" % self.__n
...
>>> assert C().spam() == "spam spam spam "
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AssertionError


And now I have accidentally broken the spam() method, due to a name clash.

Besides, double-underscore names are a PITA to work with:
>>> class Parrot(object):
...     __colour = 'blue'
...     def __str__(self):
...         return 'A %s parrot' % self.__colour
...     __repr__ = __str__
...
>>> class Red(Parrot):
...     __colour = 'red'
...
>>> Red()
A blue parrot
 

Bruno Desthuilliers

Steven D'Aprano wrote:
But if you have free access to attributes, then *everything* is interface.

Nope.


Except it doesn't.

Except it works for all real-life code I've ever seen.

(snip convoluted counter-example)


Steven, sorry for being so pragmatic, but the fact is that, from
experience (not only mine - I'm talking about thousands of man-years of
experience), it JustWorks(tm).
 

Mark Wooding

Paul Rubin said:
Also, the application area matters. There is a difference between
programming for one's own enjoyment or to do a personal task, and
writing programs whose reliability or lack of it can affect other
people's lives. I've never done any safety-critical programming but I
do a fair amount of security-oriented Internet programming.

I do quite a lot of that too. But I don't think it's necessary to have
the kinds of static guarantees that a statically-typed language provides
in order to write programs which are robust against attacks.

Many actual attacks exploit the low-level nature and lack of safety of C
(and related languages): array (e.g., buffer) overflows, integer
overflows, etc. A language implementation can foil these attacks in one
of two (obvious) ways. Firstly, by making them provably impossible --
which would lay proof obligations on the programmer to show that he
never writes beyond the bounds of an array, or that arithmetic results
are always within the prescribed bounds. (This doesn't seem practical
for most programmers.) Secondly, by introducing runtime checks which
cause the program to fail safely, either by signalling an exception or
simply terminating, when these bad things happen. In the case of array
overflows, many `safe' languages implement these runtime checks, and
they now seem to be accepted as a good idea. The case of arithmetic
errors seems less universal: Python and Lisp fall back gracefully to
unbounded integers when the machine's limits are exceeded; Java and C#
silently give incorrect results[1].
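
Two of those runtime checks, as seen from Python (a small sketch; the exact
numbers assume 32-bit int limits):

buf = bytearray(16)
try:
    buf[32] = 0x41              # in C this could scribble past the end of the buffer
except IndexError as err:
    print("caught: %s" % err)   # Python fails safely with an exception instead

n = 2 ** 31 - 1                 # INT_MAX for 32-bit arithmetic
print(n + 1)                    # 2147483648 -- promoted to a big integer, not wrapped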

Anyway, Python is exempt from these problems (assuming, at any rate,
that the implementation is solid; but we've got to start somewhere).

There's a more subtle strain of logical errors which can also be
exploited. It's possible that type errors lead to exploitable
weaknesses. I don't know of an example offhand, but it seems
conceivable that a C program has a bug where an object of one type is
passed to a function expecting an object of a different type (maybe due
to variadic argument handling, use of `void *', or a superfluous
typecast); the contents of this object cause the function to misbehave
in a manner convenient to the adversary. In Python, objects have types,
and primitive operations verify that they are operating on objects of
suitable types, signalling errors as necessary; but higher level
functions may simply assume (`duck typing') that the object conforms to
a given protocol, expecting a failure if this assumption turns out to
be false. It does seem possible that an adversary might arrange for a
different object to be passed in, which seems to obey the same protocol
but in fact misinterprets the messages. (For example, the function
expects a cleaning object, and invokes ob.polish(cup) to make the cup
become shiny; in fact, the object is a nationality detector, and returns
whether the cup is Polish; the function proceeds with a dirty cup!)
Static type systems can mitigate these sorts of `ugly duckling' attacks
somewhat, but it's not possible to do so entirely. The object in
question may in fact implement the protocol in question (implement the
interface, in Java, or be an instance of an appropriate type-class in
Haskell) but do so in an unexpected manner.
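
The cup example, spelled out as a sketch (all the names are illustrative):

class Cup(object):
    country = 'Poland'
    shiny = False

class Cleaner(object):
    def polish(self, cup):
        cup.shiny = True

class NationalityDetector(object):
    def polish(self, cup):               # same method name, very different meaning
        return cup.country == 'Poland'

def wash_up(cup, helper):
    helper.polish(cup)                   # duck typing: assumes helper cleans cups
    return cup

print(wash_up(Cup(), Cleaner()).shiny)               # True
print(wash_up(Cup(), NationalityDetector()).shiny)   # False -- the cup stays dirty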

And beyond these kinds of type vulnerabilities are other mistakes which
are very unlikely to be caught by even a sophisticated type system;
e.g., a function for accepting input to a random number generator, which
actually ignores the caller's data!

[1] Here, I don't mean to suggest that the truncating behaviour of Java
or C# arithmetic can't be used intentionally. Rather, I mean that,
in the absence of such an intention, arithmetic in these languages
simply yields results which are inconsistent with the usual rules of
integer arithmetic.
Finally, your type-A / type-B comparison works best regarding programs
written by one programmer or by a few programmers who communicate
closely.

Possibly; but I think that larger groups can cooperate reasonably within
a particular style.
I'm working on a Python program in conjunction with a bunch of people
in widely dispersed time zones, so communication isn't so fluid, and
when something changes it's not always easy to notice the change or
understand the reason and deal with it.

I'll agree that dynamic languages like Python require a degree of
discipline to use effectively (despite the stereotype of type B as
ill-disciplined hackers), and that includes communicating effectively
with other developers about changes which might affect them. Statically
typed languages provide a safety-net, but not always a complete one.
One might argue that the static-typing safety-net can lead to
complacency -- a risk compensation effect. (I don't have any evidence
for this so I'm speculating rather than arguing. I'd be interested to
know whether there's any research on the subject, though.)

Even so, I don't think I'd recommend Python for a nontrivial project to
be implemented by a team of low-to-average-competence programmers.
That's not a criticism of Python: I simply don't believe in
one-size-fits-all solutions. I'd rather write in Python; I'd probably
recommend that the above team use C#. (Of course, I'd rather have one
or two highly skilled programmers and use Python, than the low-to-
average team; but industry does like its horde-of-monkeys approach.)
There have been quite a few times when some hassle would have been
avoided by the static interfaces mandated in less dynamic languages.
Whether the hassle saved would have been outweighed by the extra
verbosity is not known.

This is another question for which it'd be nice to have answers. But,
alas, we're unlikely to get them unless dynamic typing returns to academic
favour.
I've found Haskell's type system to work pretty well for the
not-so-fancy things I've tried so far. It takes some study to
understand, but it's very uniform and beautiful.

It can be very effective, but I think I have a dynamically-typed
brain -- I keep on running into situations where I need more and more
exotic type-system features in order to do things the way I naturally
want to. It's easier to give up and use Lisp...
I'm having more trouble controlling resource consumption of programs
that are otherwise semantically correct, a well known drawback of lazy
evaluation.

I always had difficulty curbing my (natural?) inclination towards tail-
recursion, which leads to a lot of wasted space in a normal-order
language.
The purpose of unsafePerformIO is interfacing with C programs and
importing them into Haskell as pure functions when appropriate.

Yes, that's what I was doing. (I was trying to build a crypto interface
using a C library for the underlying primitives, but it became too
unwieldy and I gave up.) In particular, I used unsafePerformIO to
provide a functional view of a hash function.
The ML family avoids some of Haskell's problems, but is generally less
advanced and moribund.

Standard ML seems dead in the water; OCaml looks like it's got some
momentum behind it, though it isn't going in the same direction.
Pretty soon I think we will start seeing practical successor languages
that put the best ideas together.

Perhaps...

-- [mdw]
 

Steven D'Aprano

Steven D'Aprano wrote:
Nope.

How could anyone fail to be convinced by an argument that detailed and
carefully reasoned?


Except it works for all real-life code I've ever seen.

(snip convoluted counter-example)

"Convoluted"? It was subclassing from a class that itself was a subclass.
This happens very frequently. The only thing that was a tiny bit unusual
was a conjunction of two accidental name clashes: the subclass happened
to accidentally have the same name as one of the superclasses, and both
of them happened to have a double-underscore attribute.

Steven, sorry for being so pragmatic, but the fact is that, from
experience (not only mine - I'm talking about thousands of man/year
experience), it JustWork(tm).

Double-underscore names are great for preventing name clashes, until they
aren't.

This isn't something new. Others have pointed out this failure mode,
including the Timbot:

http://mail.python.org/pipermail/python-dev/2005-December/058563.html

I think the attitude towards __names illustrates a huge gulf between two
ways of programming. One school of thought tries to write programs that
can't fail, and considers failure modes to be bugs to be fixed. The other
school of thought tries to write programs that won't fail until something
unusual or unexpected happens, and behaves as if the answer to failure
modes is "if the function breaks when you do that, then don't do that".
Name mangling belongs in the second category.

The Python standard library has a split personality in that the parts of
it written in C are written so they can't fail, as much as humanly
possible. Good luck trying to get list.append() to unexpectedly fail. But
the parts written in Python are built to a much lower standard. Look how
easy it is to create an object with a ticking time bomb waiting to go off
at some later date: create a ConfigParser object whose defaults dict
happens to contain an integer value. You haven't messed with any internals
or done anything strange, you've just given it a default value, so you'd
expect it to be safe to work with. But then, much later:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.6/ConfigParser.py", line 545, in get
    return self._interpolate(section, option, value, d)
  File "/usr/local/lib/python2.6/ConfigParser.py", line 585, in _interpolate
    if "%(" in value:
TypeError: argument of type 'int' is not iterable
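
A minimal sketch that reproduces this kind of failure under Python 2.x's
ConfigParser (the section and option names here are hypothetical):

import ConfigParser

cp = ConfigParser.ConfigParser({'foo': 1})   # an int default is accepted silently
cp.add_section('sect')
cp.get('sect', 'foo')                        # much later: TypeError in _interpolate()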


In stricter languages, particularly code with static type checking, the
attitude is "you must only provide input that meets these pre-
conditions". (Often one pre-condition will be the argument type.) But in
Python, the attitude is "you can provide any input you like, but if it
fails, don't blame me".
 

Mark Wooding

Steven D'Aprano said:
How could anyone fail to be convinced by an argument that detailed and
carefully reasoned?

Well, your claim /was/ just wrong. But if you want to play dumb: the
interface is what's documented as being the interface.

You can tell that your claim is simply wrong by pushing it the other
way. If everything you have free access to is interface then all
behaviour observable by messing with the things you have access to is
fair game: you can rely on cmp returning one of {-1, 0, 1} on integer
arguments, for example.

But no: the Library Reference says only that it returns a negative, zero
or positive integer, and /that/ defines the interface. Everything else
is idiosyncrasy of the implementation, allowed to change at whim.
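
A concrete Python 2 sketch of that distinction:

print(cmp(1, 2))          # -1 in CPython, but only "a negative integer" is documented
assert cmp(1, 2) < 0      # fine: relies on the documented interface
assert cmp(1, 2) == -1    # fragile: relies on an implementation idiosyncrasy
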
[...]

File "/usr/local/lib/python2.6/ConfigParser.py", line 585, in
_interpolate
if "%(" in value:
TypeError: argument of type 'int' is not iterable

I'd say that this is a bug. The Library Reference says (9.2):

: `ConfigParser([defaults])'
: Derived class of `RawConfigParser' that implements the magical
: interpolation feature and adds optional arguments to the `get()'
: and `items()' methods. The values in DEFAULTS must be
: appropriate for the `%()s' string interpolation. Note that
: __NAME__ is an intrinsic default; its value is the section name,
: and will override any value provided in DEFAULTS.

The value 1 is certainly appropriate for `%()s' interpolation:

In [5]: '%(foo)s' % {'foo': 1}
Out[5]: '1'

so you've satisfied the documented preconditions.
In stricter languages, particularly code with static type checking,
the attitude is "you must only provide input that meets these pre-
conditions".

You did that. It failed anyway. Therefore it's broken. Report a bug.

-- [mdw]
 
