return in loop for ?

M

Mike Meyer

Duncan Booth said:
In practice it is impossible to write code in Python (or most
languages) with only one return point from a function: any line could throw
an exception which is effectively another return point, so the cleanup has
to be done properly anyway.

This simply isn't true. Not having a return statement is no worse than
not having a goto. Well, maybe it's a little worse. A number of
languages don't have it. Rewriting my example to have a single return
is easy:

def f():
    for i in range(20):
        if i > 10: break
        inloop()    # Added for a later example
    return

This isn't noticeably different from the original. Of course, if you
want to *do* something after the for loop, you have to test
the conditional again (or use a flag variable):

def f():
    for i in range(20):
        if i > 10: break
        inloop()
    if not (i > 10):
        afterloop()
    return
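
For completeness, here's the flag-variable variant mentioned above; it's
just a sketch, with inloop() and afterloop() the same placeholders as
before:

def f():
    broke_out = False
    for i in range(20):
        if i > 10:
            broke_out = True
            break
        inloop()
    if not broke_out:
        afterloop()
    return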

In my experience, people who believe that single exit is a good
principle often believe that's true at the statement level as well, so
that "break" and "continue" in for loops are bad things. That means
you have to rewrite the above as:

def f():
    for i in [j for j in range(20) if j <= 10]:
        inloop()
    if not (i > 10):
        afterloop()
    return

At this point, the argument collapses in a cloud of impracticality,
and we go back to returning from wherever we want to.

<mike
 
M

Mike Meyer

Roy Smith said:
In any case, in a language which has exceptions, it's almost impossible to
really have true SESE, since an exception could be thrown from almost
anywhere. To be fair, there are those who use this to argue that
exceptions themselves are a bad thing. In my last job, the official style
guide said to not use exceptions in C++ because they generate confusing
flow of control, but I think that's becoming the minority view these days.

I really like Eiffel's model of exception handling. Instead of having
the ability to catch exceptions at arbitrary points in your code -
which, as you point out, can lead to confusing flow of control - a
function can have a single exception handler - a "rescue" clause - that
handles all exceptions raised in that function. Further, the rescue
clause does one of two things: it either retries, starting the function
over again, or passes the exception back up the chain. I'm not sure
that it stacks up on the practicality scale, but it certainly leads to
a more comprehensible program when you are dealing with lots of exceptions.
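
For comparison, here's a rough Python imitation of that shape (the
with_retry decorator and fetch_record function are invented for this
illustration, not anything discussed above): every exception raised
anywhere in the function body is handled in one place, which either
starts the function over or lets the exception propagate up the chain.

def with_retry(max_attempts=3):
    def decorator(func):
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return func(*args, **kwargs)   # the normal body
                except Exception:
                    if attempt + 1 == max_attempts:
                        raise                      # pass it up the chain
                    # otherwise fall through and start the function over
        return wrapper
    return decorator

@with_retry(max_attempts=3)
def fetch_record(key):
    ...   # any line here may raise; the handler above is the only handler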

<mike
 
M

Magnus Lycka

Mike said:
This isn't noticeably different from the original. Of course, if you
want to *do* something after the for loop, you have to test
the conditional again (or use a flag variable):

def f():
    for i in range(20):
        if i > 10: break
        inloop()
    if not (i > 10):
        afterloop()
    return

Nope. Use for-else, like this:

def f():
    for i in range(20):
        if i > 10: break
        inloop()
    else:
        afterloop()
    return
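
(The else block of a for loop runs only when the loop finishes without
hitting break, so afterloop() is skipped exactly when we bailed out early.)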


In practice, a good reason to follow Knuth rather than Dijkstra,
and allow multiple exits, is that the more levels of indentation
we have, the more difficult it is to follow the code.

Flat is better than nested...

With multiple exits, we can typically avoid nested else
blocks and the like, and get a much more linear program flow, where
error cases lead to premature exits and the normal,
full flow runs straight from top to bottom in a function.
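
As a small illustration (the names process_*, handle and the request
dict are made up for the example), compare a nested single-exit version
with one that exits early:

def handle(payload):              # stand-in for the real work
    return payload

# Nested single-exit style: every check adds another level of indentation.
def process_nested(request):
    result = None
    if request is not None:
        if request.get("user"):
            if request.get("payload"):
                result = handle(request["payload"])
    return result

# Multiple exits: the error cases leave early, and the normal flow reads
# straight down the page.
def process_flat(request):
    if request is None:
        return None
    if not request.get("user"):
        return None
    if not request.get("payload"):
        return None
    return handle(request["payload"])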
 
S

Steven D'Aprano

That is logically equivalent to the first case, so it doesn't get you
anywhere. (Just combine A and B into a single program which invokes A
unless the input is A, in which case it invokes B instead.)

Then there you go, there is a single program which can verify itself.

I think you are conflating the impossibility of any program which can
verify ALL programs (including itself) with the impossibility of a program
verifying itself. Programs which operate on their own source code do not
violate the Halting Problem. Neither do programs which verify some
subset of the set of all possible programs.
 
S

Steve Holden

Steven said:
Then there you go, there is a single program which can verify itself.

I think you are conflating the impossibility of any program which can
verify ALL programs (including itself) with the impossibility of a program
verifying itself. Programs which operate on their own source code do not
violate the Halting Problem. Neither do programs which verify some
subset of the set of all possible programs.
There seems to have been some misattribution in recent messages, since I
believe it was *me* who raised doubts about a program verifying itself.
This has nothing to do with the Halting Problem at all. A very simple
possible verification program is one that outputs True for any input.
This will also verify itself. Unfortunately its output will be invalid
in that and many other cases.

I maintain that we cannot rely on any program's assertions about its own
formal correctness.

regards
Steve
 
S

Steven D'Aprano

There seems to have been some misattribution in recent messages, since I
believe it was *me* who raised doubts about a program verifying itself.

Fair enough.
This has nothing to do with the Halting Problem at all.

On the contrary, the Halting Problem is just a sub-set of the verification
problem. Failure to halt when it is meant to is just one possible way a
program can fail verification.

On the one hand, the Halting Problem is much simpler than the question of
proving formal correctness, since you aren't concerned whether your
program actually does what you want it to do, so long as it halts.

But on the other hand, the Halting Problem is so much harder that it
actually becomes impossible -- it insists on a single algorithm which is
capable of checking every imaginable input.

Since real source code verifiers make no such sweeping claims to
perfection (or at least if they do they are wrong to do so), there is no
such proof that they are impossible. By using more and more elaborate
checking algorithms, your verifier gets better at correctly verifying
source code -- but there is no guarantee that it will be able to correctly
verify every imaginable program.

A very simple
possible verification program is one that outputs True for any input.
This will also verify itself. Unfortunately its output will be invalid
in that and many other cases.

No doubt. But slightly more complex verification programs may actually
analyse the source code of some arbitrary program, attempt to use formal
algebra and logic to prove correctness, returning True or False as
appropriate.

That program itself may have been painstakingly proven correct by teams of
computer scientists, logicians and mathematicians, after which the
correctness of its results are as certain as any computer program can be.
That is the program we should be using, not your example.

I maintain that we cannot rely on any program's assertions about its own
formal correctness.

There are two ways of understanding that statement.

If all you mean to say is "We cannot rely on any program's assertions
about any other program's formal correctness, including its own", then I
can't dispute that -- I don't know how good formal correctness checking
programs are. It is possible, I suppose, that the state of the art merely
returns True regardless of the input, although I imagine a more realistic
estimate is that they would at least call something like pylint to check
for syntax errors, and return False if they find any.

But if you mean to say "We can rely on the quality of correctness
checkers in general, but not when they run on their own source code", then
I think you are utterly mistaken to assume that there is some magic
quality of a verification program's own source code that prevents the
verification program working correctly on itself.

And that was my point: formal correctness checking programs will be as
good at testing themselves as they are at testing other programs. If you
trust them to check other programs, you have no reason not to trust them
to check themselves.
 
S

Steve Holden

Steven said:
There are two ways of understanding that statement.

If all you mean to say is "We cannot rely on any program's assertions
about any other program's formal correctness, including its own", then I
can't dispute that -- I don't know how good formal correctness checking
programs are. It is possible, I suppose, that the state of the art merely
returns True regardless of the input, although I imagine a more realistic
estimate is that they would at least call something like pylint to check
for syntax errors, and return False if they find any.
No, I didn't mean to say that. Although of course there *are* issues
surrounding the semantic edge cases to do with implementation on real
hardware that do require formal theories to be limited and hedged with
restrictions due to the limitations of the underlying hardware, for
example. Numerical analysts, though, have been well used to reasoning
about differences between the theoretical behaviour of algorithms and
the behaviour of their implementations for decades, so this is nothing new.
But if you mean to say "We can rely on the quality of correctness
checkers in general, but not when they run on their own source code", then
I think you are utterly mistaken to assume that there is some magic
quality of a verification program's own source code that prevents the
verification program working correctly on itself.
Well naturally I can't stop you thinking that, so I won't try.
And that was my point: formal correctness checking programs will be as
good at testing themselves as they are at testing other programs. If you
trust them to check other programs, you have no reason not to trust them
to check themselves.
Your bald assertion fails to change my mind about that, but it is quite
a fine theoretical issue. For what it's worth, I *have* discussed this
issue with academic formal proof specialists, some of whom have admitted
that it's a (theoretical) problem. But in the world of the practical I
wouldn't disagree with your characterisation of the state of the art.

regards
Steve
 
D

Duncan Booth

Steven said:
Since real source code verifiers make no such sweeping claims to
perfection (or at least if they do they are wrong to do so), there is
no such proof that they are impossible. By using more and more
elaborate checking algorithms, your verifier gets better at correctly
verifying source code -- but there is no guarantee that it will be
able to correctly verify every imaginable program.
I'm sure you can make a stronger statement than your last one. Doesn't
Gödel's incompleteness theorem apply? I would have thought that no matter
how elaborate the checking, it is guaranteed that there exist programs which are
correct but your verifier cannot prove that they are.
 
S

Sybren Stuvel

Duncan Booth enlightened us with:
I would have thought that no matter how elaborate the checking, it is
guaranteed that there exist programs which are correct but your verifier
cannot prove that they are.

Yep, that's correct. I thought the argument was similar to the proof
that no program (read: Turing machine) can determine whether a program
will terminate or not.

Sybren
 
S

Steven D'Aprano

I'm sure you can make a stronger statement than your last one. Doesn't
Gödel's incompleteness theorem apply? I would have thought that no matter
how elaborate the checking, it is guaranteed that there exist programs which are
correct but your verifier cannot prove that they are.

Yes, of course. By saying "no guarantee" I was guilty of understatement:
at the very least, we can guarantee that the verifier will *not* correctly
verify every possible program. (Which of course is no different from human
programmers.)
 
S

Steven D'Aprano

Sybren Stuvel said:

Yep, that's correct. I thought the argument was similar to the proof
that no program (read: Turing machine) can determine whether a program
will terminate or not.

No, that is not right -- it is easy to create a program to determine
whether *some* programs will terminate, but it is impossible to create a
program which will determine whether *all* programs will terminate.
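
For instance, here's a toy sketch (invented for illustration, not
anything from this thread) of a checker that only answers for a very
restricted class: Python functions whose source contains no loops,
no comprehensions and no calls must halt, and for everything else it
refuses to say.

import ast
import inspect
import textwrap

def surely_halts(func):
    """Return True if func trivially halts (no loops, no comprehensions,
    no calls at all), otherwise None, meaning "can't tell".
    Deliberately conservative."""
    source = textwrap.dedent(inspect.getsource(func))
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.For, ast.AsyncFor, ast.While,
                             ast.comprehension, ast.Call)):
            return None
    return True

def add(a, b):
    return a + b

print(surely_halts(add))           # True: straight-line code must halt
print(surely_halts(surely_halts))  # None: it contains calls, so we don't say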
 
M

Mike Meyer

Steven D'Aprano said:
No, that is not right -- it is easy to create a program to determine
whether *some* programs will terminate, but it is impossible to create a
program which will determine whether *all* programs will terminate.

Which means you can't create a verifier which will verify all
programs. Is there a reason to believe that you can't have a verifier
with three possible outcomes: Correct, Incorrect, and I don't know,
and it is always correct in doing so? Note that "I don't know" could
be "I ran longer than I think is reasonable and gave up trying."

<mike
 
P

Paul Rubin

Mike Meyer said:
Which means you can't create a verifier which will verify all
programs. Is there a reason to believe that you can't have a verifier
with three possible outcomes: Correct, Incorrect, and I don't know,
and it is always correct in doing so? Note that "I don't know" could
be "I ran longer than I think is reasonable and gave up trying."

It's trivial to write such a verifier, if you get my drift.
 
M

Mike Meyer

Paul Rubin said:
It's trivial to write such a verifier, if you get my drift.

Almost as cute as the simplest self-replicating shell script.

Ok, so it's possible. Are there any useful examples? Does the BCPL
type verifier count?

<mike
 
S

Steven D'Aprano

Which means you can't create a verifier which will verify all
programs.

I thought that's what I said originally *wink*
Is there a reason to believe that you can't have a verifier
with three possible outcomes: Correct, Incorrect, and I don't know,
and it is always correct in doing so? Note that "I don't know" could
be "I ran longer than I think is reasonable and gave up trying."

That seems perfectly reasonable to me.

I don't know about anyone else in this discussion, but I'm talking about
*theoretical* source code verification of formal correctness. In
*practice*, I don't know what the state of the art is, but I suspect it
will be a long, long time before it is as easy as running "import
mymodule; verify(mymodule)".
 
P

Paul Rubin

Mike Meyer said:
Almost as cute as the simplest self-replicating shell script.

Ok, so it's possible. Are there any useful examples? Does the BCPL
type verifier count?

The trivial verifier simply prints "I don't know" for EVERY program
you input. It is never wrong.
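
In Python terms that trivial verifier is a one-liner (the function name
is just for illustration):

def verify(program_source):
    # Never claims Correct or Incorrect, so it can never be wrong.
    return "I don't know"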

I don't know about the BCPL type verifier but every statically typed
language verifies certain assertions about the types of expressions
and this is useful. I think I heard that the Hindley-Milner algorithm
always succeeds (not sure what the conditions are for that) but that
it can take exponential time for some pathological cases. The
incompleteness theorem says there are undecidable problems in any
system complex enough to include Peano arithmetic. So if the
Hindley-Milner algorithm always succeeds, it just means that such type
systems aren't expressive enough to encode arithmetic.

I don't really know enough about this type stuff to discuss it
sensibly at the moment. There's a book I want to read, "Types and
Programming Languages" by Benjamin Pierce, which is supposed to
explain it pretty well and is said to be excellent, but I haven't
had a chance to sit down with a copy yet.

http://www.cis.upenn.edu/~bcpierce/tapl/
 
P

Paul Rubin

Steven D'Aprano said:
I don't know what the state of the art is, but I suspect it
will be a long, long time before it is as easy as running "import
mymodule; verify(mymodule)".

Some pretty significant pieces of software have been formally verified
to meet their formal specifications. The trouble with
"verify(mymodule)" is that usually means something closer to "verify
that mymodule does what I hope it does". The computer does not yet
have a telepathy peripheral that can read your mind and figure out
what you are hoping, and people's attempts to express formally what
they are hoping are easily wrong just like programs are often wrong.
 
M

Mike Meyer

Paul Rubin said:
I don't know about the BCPL type verifier but every statically typed
language verifies certain assertions about the types of expressions
and this is useful.

BCPL is untyped. You can perform any operation on any variable. You
can use the floating point add op on a pair of ints, or a pair of
function pointers. It doesn't care - it doesn't know what types the
variables are.

The type verifier extracted type information from context, and
propagated that through the system to look for places where the
programmer applied an operation to a type for which it was
inappropriate. It found bugs in itself.
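
Something in the same spirit can be sketched in a few lines of Python
(an invented illustration, nothing to do with the actual BCPL tool):
infer a type for each leaf from context, propagate it up the expression
tree, and complain when an operator meets an operand of the wrong type.

from dataclasses import dataclass

@dataclass
class Num:
    value: int

@dataclass
class Var:
    name: str

@dataclass
class BinOp:
    op: str            # "+" is the integer add op, "+." the float add op
    left: object
    right: object

def check(node, env, errors):
    """Return the inferred type of node, recording a complaint whenever
    an operator is applied to an operand of an inappropriate type."""
    if isinstance(node, Num):
        return "int"
    if isinstance(node, Var):
        return env.get(node.name, "unknown")
    left_type = check(node.left, env, errors)
    right_type = check(node.right, env, errors)
    wanted = "float" if node.op == "+." else "int"
    for t in (left_type, right_type):
        if t not in (wanted, "unknown"):
            errors.append(f"{node.op} applied to {t} operand")
    return wanted

errors = []
# x is known from context to be an int, y a float; feeding x to the
# float add op is the kind of slip the checker is meant to catch.
check(BinOp("+.", Var("x"), Var("y")), {"x": "int", "y": "float"}, errors)
print(errors)              # ['+. applied to int operand']
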
I don't really know enough about this type stuff to discuss it
sensibly at the moment. There's a book I want to read, "Types and
Programming Languages" by Benjamin Pierce, which is supposed to
explain it pretty well. It's supposed to be excellent. But I haven't
had a chance to sit down with a copy yet.

I've been looking through it. It deals with type correctness and, by
association, with whether evaluation terminates. There are some fairly complex
systems in it. He does promise to reveal the features of programming
languages that cause the programs (or their type validation) to fail
to terminate.

<mike
 
A

Alex Martelli

Paul Rubin said:
Some pretty significant pieces of software have been formally verified
to meet their formal specifications. The trouble with
"verify(mymodule)" is that usually means something closer to "verify
that mymodule does what I hope it does". The computer does not yet
have a telepathy peripheral that can read your mind and figure out
what you are hoping, and people's attempts to express formally what
they are hoping are easily wrong just like programs are often wrong.

Yep. I'm reminded of an ambulance dispatch system (read about it a long
time ago on the Risks mailing list, so I might be off on the detail, but
the gist should be correct) which was formally proven to obey a long
list of rules, one of which was "for this kind of emergency, an
ambulance must arrive within 10 minutes" (or the like).

Problem, of course, being that in formal logic this implies "if this
kind of emergency exists and no ambulance has arrived within 10 minutes,
you have an impossibility and from an impossibility ANY theorem can be
proven". So, if real-world road conditions meant the ambulance was
about to arrive in 10 minutes and 5 seconds, it might instead be
rerouted elsewhere... not having arrived in 10 minutes, a contradiction
was in the system and it could prove anything (I think that's known as
the theorem of Pseudo-Scotus). It's hard in (classical) logic to express
"this MUST be true; if it's false, ..." -- to human beings the first
"MUST" can be interpreted as "99.9% probability" or the like, so the "if
it's false" clause is meaningful ("in the extremely unlikely, but not
impossible since nothing's truly impossible in the real world, situation
that it's false, ..."), but to Aristotle's logic, if something MUST be
true, it's obviously irrelevant whatever might follow if that something
were instead to be false.


Alex
 
