Non-constant constant strings

R

Rick C. Hodgin

This sounds like "test-driven development", which is popular in many
circles. AIUI, the basic idea is that you start by writing unit tests
to check whether the software complies with the specified requirements,
and then you code and debug the actual software until it passes those
tests, which (if done correctly) means you're done.
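
In code terms, the idea looks roughly like this; the clampi function and
its cases are invented purely for illustration, not taken from anyone's
project:

#include <assert.h>

int clampi(int v, int lo, int hi);       /* does not exist yet */

/* The test is written first, against the requirement ("clamp v into
 * [lo, hi]"); the function is then coded until this compiles, links,
 * and passes. */
static void test_clampi(void)
{
    assert(clampi( 5, 0, 10) ==  5);     /* in range: unchanged        */
    assert(clampi(-3, 0, 10) ==  0);     /* below range: clamped to lo */
    assert(clampi(42, 0, 10) == 10);     /* above range: clamped to hi */
}

/* Written afterwards, only to satisfy the test above. */
int clampi(int v, int lo, int hi)
{
    if (v < lo) return lo;
    if (v > hi) return hi;
    return v;
}

int main(void)
{
    test_clampi();
    return 0;
}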

Stephen, thank you. Now that you describe it this way I do remember the
idea. IIRC, I remember also "contract programming" where one arranges the
contracts between disparate system components. So long as each component
is fulfilling their end of the contract, everything is supposed to work
correctly.
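
In C, those contracts are often approximated with plain assertions; a
minimal sketch, with the tiny stack type invented just for illustration:

#include <assert.h>
#include <stddef.h>

struct stack { int data[16]; size_t count; };

/* Contract: the caller may only pop a non-empty stack (its obligation);
 * in return, pop promises to remove exactly one element (ours). */
static int stack_pop(struct stack *s)
{
    assert(s != NULL && s->count > 0);   /* the caller's end of the contract */
    size_t before = s->count;
    int value = s->data[--s->count];
    assert(s->count == before - 1);      /* our end of the contract */
    return value;
}

int main(void)
{
    struct stack s = { .count = 0 };
    s.data[s.count++] = 7;               /* an informal push */
    return stack_pop(&s) == 7 ? 0 : 1;
}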

I have considered introducing some of these concepts into my IDE. We'll
see. Still a ways off in my list of goals (about 9th on the list).

Best regards,
Rick C. Hodgin
 
J

James Kuyper

And is sort of ridiculous, as obviously at least some of the "regs" use
debuggers at least some of the time.

There are two groups of regulars, with almost no overlap:
A: the people referred to as "regs"
B: the people who use that term to refer to members of the first group.
Therefore, membership in group A is determined exclusively by members of
group B. Competence and rationality seem to be the main membership
requirements, but that's not enough to explain all of the people who are
and are not considered to be "regs".

I can remember only one person who's claimed in this newsgroup to have
never used a debugger, though there may have been others. Most of the
members of group A would say, and have said, that "debuggers can be
useful when debugging a program, but are not essential", though with a
lot of differences of opinion about how useful debuggers can be. For
some reason, group B has trouble understanding such sentences as written
- they usually misinterpret them as meaning "I never use debuggers."

Some members of group B have asserted that it is impossible to debug a
program without using a debugger; therefore, anyone who has ever claimed
to do so must have been lying. They then repeat the assertion that "regs
never use debuggers" as evidence that we are not merely stupid, but also
liars.
 
J

James Kuyper

On 01/26/2014 04:37 PM, Rick C. Hodgin wrote:
....
Are you suggesting I write a unit test (or unit tests) before I even have
the algorithm I'll be testing coded or debugged?

Not quite. Unit test code should generally be written after the code
unit it is testing, but the first draft of your unit test plan should be
written before you write the first line of code.

This is a special case of a more general rule: the best time to write
the first draft of any document associated with your code is before you
write the first line of code. You should start with the user's guide,
and then work your way down from high-level design to low-level design
to test plans. Those documents will need to be updated as you learn
things during development; the last draft might not occur until after
the last change to any line of code; but the first draft should occur as
early as possible. That allows you to catch design errors early, when
it's less expensive to fix them.
 
D

David Brown

(Sorry about the blank post.)

His hair is probably good so far as Leviticus 19:27 is concerned...

Yes, but there is a note in Corinthians (IIRC) about men having long
hair being "shameful"...
 
R

Rick C. Hodgin

Gosh, inconsistent rules? ;-)

I think we can explain it away as a difference between hair and
sideburns...

Covenants. Old Testament. New Testament. The old was based on the law.
The new is based upon grace, being led by God's Holy Spirit given unto
men after Pentecost (50 days after Jesus' resurrection).

Old = B.C. (Before Christ)
New = A.D. (Anno Domini, "in the year of the Lord")

Jesus divided time, with the new covenant in His blood.

Best regards,
Rick C. Hodgin
 
G

glen herrmannsfeldt

James Kuyper said:
On 01/26/2014 02:01 PM, Seebs wrote:
(snip)
There are two groups of regulars, with almost no overlap:
A: the people referred to as "regs"
B: the people who use that term to refer to members of the first group.
Therefore, membership in group A is determined exclusively by members
of group B. Competence and rationality seem to be the main membership
requirements, but that's not enough to explain all of the people who
are and are not considered to be "regs".
I can remember only one person who's claimed in this newsgroup to have
never used a debugger, though there may have been others. Most of the
members of group A would say, and have said, that "debuggers can be
useful when debugging a program, but are not essential", though with a
lot of differences of opinion about how useful debuggers can be. For
some reason, group B has trouble understanding such sentences as written
- they usually misinterpret them as meaning "I never use debuggers."

Hmm. I am not sure which group I am in. A large fraction of the time, if
I know which statement the program failed in, I can figure it out.
(Especially in Java, where subscript bounds checking is required.
In C, the effect of such an error sometimes takes much longer to show up.)

Many years ago, I remember debugging in ORVYL, and even finding a
bug in the debugger. The IBM S/370 PER (Program Event Recording)
feature is very convenient for debugging. Programs can ask for an
interrupt on fetch or store within a specified address range,
modification of a specified register, and more. The bug happened
when putting a breakpoint on a BCR 15,0 (branch to the address in
register 0) instruction.

Does using adb or gdb to find the address/statement where a program
dies count as "using" the debugger? Does adding printf() calls at
appropriate points count as not using a debugger?
Some members of group B have asserted that it is impossible to debug
a program without using a debugger; therefore, anyone who has ever
claimed to do so must have been lying. They then repeat the assertion
that "regs never use debuggers" as evidence that we are not merely
stupid, but also liars.

-- glen
 
G

glen herrmannsfeldt

(snip)
The idea of free/libre software is a correct one. It relates back to what
God gave us in the way of the things of His creation. He gave us apples,
with seeds inside, each of which can grow a new apple tree.

Reminds me of a story I heard not so long ago.

It seems that apple genetics is unusual, in that the tree you get
from planting a seed isn't so similar to the apple that it came from
as you might expect. The apple trees that you buy, or that apple
farmers grow, are propagated by grafting.

Seems that planting seeds is fine if you want sour apples, good for
making hard cider, but not so good for sweet eating apples.
Now, just what was Johnny Appleseed trying to grow?

-- glen
 
S

Seebs

At least, that's my opinion. That long-ago discussion about debuggers
did, however, include some disagreements about what "debugger" actually
means. As I understand it, a debugger is a separate piece of software
that allows you to monitor the state of a program. Code added to your
own program is a common and powerful debugging technique, but doesn't
qualify as creating a full-fledged debugger.

In general, I prefer logging to debuggers, but there are certainly times
when only a debugger will get you the information you need, because what
you are trying to observe goes away if the code itself tries to inspect it:
e.g., floating point values held in higher-precision registers during
intermediate computations, which get spilled to memory (and rounded) the
moment you add code to examine them.

-s
 
J

James Kuyper

On 01/27/2014 12:59 PM, glen herrmannsfeldt wrote:
....
Does using adb or gdb to find the address/statement where a program
dies count as "using" the debugger? ...
Yes.

... Does adding printf() calls at
appropriate points count as not using a debugger?

In itself, yes, though nothing prevents the combination of debugging
printf()s and running an actual debugger.

At least, that's my opinion. That long-ago discussion about debuggers
did, however, include some disagreements about what "debugger" actually
means. As I understand it, a debugger is a separate piece of software
that allows you to monitor the state of a program. Code added to your
own program is a common and powerful debugging technique, but doesn't
qualify as creating a full-fledged debugger.
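
As for what those debugging printf()s usually look like in practice, one
conventional shape (the macro name here is just an illustration, not
anyone's specific code):

#include <stdio.h>

/* Compiled in when DEBUG is defined, compiled out otherwise, so the
 * output can coexist with (or substitute for) a debugger session. */
#ifdef DEBUG
#define dbg_printf(...) fprintf(stderr, __VA_ARGS__)
#else
#define dbg_printf(...) ((void)0)
#endif

static int square(int x)
{
    dbg_printf("square: x = %d at %s:%d\n", x, __FILE__, __LINE__);
    return x * x;
}

int main(void)
{
    return square(6) == 36 ? 0 : 1;
}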
 
I

Ian Collins

James said:
On 01/26/2014 04:37 PM, Rick C. Hodgin wrote:
....

Not quite. Unit test code should generally be written after the code
unit it is testing, but the first draft of your unit test plan should be
written before you write the first line of code.

Unless you are using TDD where you do write the tests just before the
code. The code is written to pass the test. Your "unit test plan" is
simply "Use TDD" :)
 
I

Ian Collins

James said:
On 01/27/2014 12:59 PM, glen herrmannsfeldt wrote:
....

In itself, yes, though nothing prevents the combination of debugging
printf()s and running an actual debugger.

There's a third option: use something like dtrace to probe a running
application. The use of dtrace probes (both system and user) removes
the need for printfs and often saves having to use a debugger. Writing
code (D scripts) to analyse code can be fun!
 
I

Ike Naar

Unless you are using TDD where you do write the tests just before the
code. The code is written to pass the test. Your "unit test plan" is
simply "Use TDD" :)

Is the writer of the code supposed to know what the test looks like?
If so, they could easily cheat.
Let's have an example: suppose a function "primeafter" must be written
that takes a number, say n, as input, and produces as output the first
prime number >=n.
The test suite may consist of
input      expected output
1          2
10         11
100        101
1000       1009
10000      10007
100000     100003
1000000    1000003

If the programmer knows this is the test suite, it's easy to write a
function that passes the test:

int primeafter(int n)
{
    switch (n)
    {
        case 1:       return 2;
        case 10:      return 11;
        case 100:     return 101;
        case 1000:    return 1009;
        case 10000:   return 10007;
        case 100000:  return 100003;
        default:      return 1000003;
    }
}

but obviously this is not what was intended.
How does TDD cope with this?
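
(For contrast, the general solution the tests are presumably meant to
drive out is barely longer than the lookup hack. A sketch only, using
simple trial division:)

static int is_prime(int n)
{
    if (n < 2)
        return 0;
    for (int d = 2; (long long)d * d <= n; d++)
        if (n % d == 0)
            return 0;
    return 1;
}

int primeafter(int n)
{
    while (!is_prime(n))
        n++;
    return n;
}

int main(void)
{
    /* spot-check against two entries from the table above */
    return (primeafter(1) == 2 && primeafter(1000) == 1009) ? 0 : 1;
}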
 
K

Kaz Kylheku

The writer of the code is the writer of the test.


Cheating one's self is rather pointless!

I think Ike has a very good point there.

Yes, cheating oneself deliberately is pointless.

Yet psychology tells us that people fool themselves all the time. They
selectively look for evidence that confirms their beliefs and reject evidence
that is contrary; they sabotage their efforts at success due to their fears;
they invent rationalizations **after** taking an action about why they took
that action, and so on.

If you write a test first, then you may subconsciously write the function to
meet the test, because you remember the structure of the test, and passing the
test is an important requirement. You may be so focused on the test, that you
neglect to think about the other aspects of the code.

Writing tests first is a psychological strategy, maybe even a good one, but
there is no perfect psychological strategy.

Sometimes in code, there are important test cases which are particulars that
occur as consequences of some general behaviors or properties of the solution.
If you start adding hacks for detecting those particulars and supplying special
cases for them in the code, then I would say that at that point the
test-driven concept is showing signs of starting to fall apart.
 
I

Ian Collins

Kaz said:
I think Ike has a very good point there.

Yes, cheating oneself deliberately is pointless.

Yet psychology tells us that people fool themselves all the time. They
selectively look for evidence that confirms their beliefs and reject evidence
that is contrary; they sabotage their efforts at success due to their fears;
they invent rationalizations **after** taking an action about why they took
that action, and so on.

If you write a test first, then you may subconsciously write the function to
meet the test, because you remember the structure of the test, and passing the
test is an important requirement. You may be so focused on the test, that you
neglect to think about the other aspects of the code.

Writing tests first is a psychological strategy, maybe even a good one, but
there is no perfect psychological strategy.

I agree. What you describe above is one of the pitfalls of writing code
on one's own. Having a pair who can keep an eye on what's being written
goes a long way to mitigating that risk.
 
R

Rick C. Hodgin

Four or five times I think, when people suggested you read it in and
you said that that would make your build system much too complicated.

I never said it was too difficult. I said that if I pursued that course,
now I'm maintaining more files than just the raw source files and
constructed executable. In addition, my particular needs are that I
need to have the pointers broken out to the start of each line, so now
I'm parsing loaded data. If I read in a file manually, I'm writing
source code to parse it out, which means program logic and more
opportunities for error.
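
Roughly, that reading-it-in approach amounts to code like the following;
a sketch only, with invented names and with error handling and
realloc-failure checks trimmed:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char  *file_data;       /* the whole file, loaded at startup */
char **line_starts;     /* pointer to the start of each line */
size_t line_count;

static int load_lines(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;

    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);

    file_data = malloc((size_t)size + 1);
    if (!file_data || fread(file_data, 1, (size_t)size, f) != (size_t)size) {
        fclose(f);
        return -1;
    }
    file_data[size] = '\0';
    fclose(f);

    /* Break the buffer into lines in place, recording each start. */
    for (char *p = file_data; *p; ) {
        line_starts = realloc(line_starts, (line_count + 1) * sizeof *line_starts);
        line_starts[line_count++] = p;
        char *nl = strchr(p, '\n');
        if (!nl)
            break;
        *nl = '\0';
        p = nl + 1;
    }
    return 0;
}

int main(int argc, char **argv)
{
    if (argc < 2 || load_lines(argv[1]) != 0)
        return 1;
    printf("%zu lines loaded\n", line_count);
    return 0;
}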
That was before you made a build system that involved two compilers
just to avoid making the build complicated.

IIRC - it's been a long and tedious thread.

The solution I came up with does use two compilers, but it does not require
any programming or work after the initial setup is done. The compiler
handles all of it at compile time. I simply have the variable set up the
way I need it in the executable.

In addition, there is a tremendous gain there by having the new ability to
encode source code in any GCC tool, as well as Visual C++, and have them
link together into a single final product. I have never done that before,
and it was very exciting to see.

Best regards,
Rick C. Hodgin
 
R

Robbie Brown

I think Ike has a very good point there.

Yes, cheating oneself deliberately is pointless.

Yet psychology tells us that people fool themselves all the time. They
selectively look for evidence that confirms their beliefs and reject evidence
that is contrary; they sabotage their efforts at success due to their fears;
they invent rationalizations **after** taking an action about why they took
that action, and so on.

If you write a test first, then you may subconsciously write the function to
meet the test, because you remember the structure of the test, and passing the
test is an important requirement. You may be so focused on the test, that you
neglect to think about the other aspects of the code.

Writing tests first is a psychological strategy, maybe even a good one, but
there is no perfect psychological strategy.

Sometimes in code, there are important test cases which are particulars that
occur as consequences of some general behaviors or properties of the solution.
If you start adding hacks for detecting those particulars and supplying special
cases for them in the code, then I would say that at that point the
test-driven concept is showing signs of starting to fall apart.

I think you're missing the point about TDD. Upthread someone mentioned
a method to calculate the next prime after some value.

If the requirements were to only return the next prime after 1, 10, 100,
1000 etc. up to a given value, then the example test would be fine; if the
test passes, then the requirements have been met ... result.

If the requirements were to take 'any' value and calculate the next
prime then the test setup or 'fixture' was incorrect. Not meeting
requirements in the fixture is just as bad as not meeting them in the
end result code. If the fixture is wrong then so are the results (almost
guaranteed). Writing code to pass a well-defined test will result in
code that passes the test and therefore meets requirements. It's not
magic, but it does force you to think about exactly what you are doing
before you actually do it.

It's been a number of years since I used it in anger (JUnit for Java)
but I seem to remember it worked rather well.
 
G

glen herrmannsfeldt

Robbie Brown said:
On 28/01/14 00:22, Kaz Kylheku wrote:

(snip, someone wrote)
(snip)
I think you're missing the point about TDD. Upthread someone mentioned
a method to calculate the next prime after some value.
If the requirements were to only return the next prime after 1,
10, 100, 1000 etc. up to a given value, then the example test would
be fine; if the test passes, then the requirements have been
met ... result.

Yes, but the example isn't very realistic. No-one, when assigned to
write a program to return the next prime, will write one that special
cases a few values.

On the other hand, given a challenge/response system, one might
find a back door by predicting the expected response.

I was recently working on a Coursera bioinformatics course where
to pass a programming assignment you download a test file, and then
have five minutes to produce the appropriate output and submit it.

In most cases you can't cheat by writing a program that handles only the
subset of test cases you were given, but maybe that isn't true of all of them.

There might be other automatic grading systems for programming
assignments that don't use a large enough set of test cases.
(That is, you could fool the TA, but not yourself.)

Also, the OS/360 examples from "Mythical Man-Month" included cases
where people were given a byte limit on their code. OS/360 also
includes an overlay linker, so when someone approached the limit
they could start using overlays. I might even imagine someone meeting
the test cases (speedier debugging) without overlays, but in actual
use it would run too slow.

-- glen
 
R

Robbie Brown

(snip, someone wrote)


Yes, but the example isn't very realistic. No-one, when assigned to
write a program to return the next prime, will write one that special
cases a few values.

But 'write a program to return the next prime' isn't much of a
requirement, is it? The next prime after what? What are the boundary cases?
I'm no mathematician, but I think there may well be an infinite number
of them. How big do you want to go, how long do you want to continue?
All these parameters will (should) be part of the specification.

Nothing can really defend against someone determined to bugger things
up; tests are really just another tool. IME they certainly do help to
focus my (imperfect) mind.
On the other hand, given a challenge/response system, one might
find a back door by predicting the expected response.

But what can be done about that? It's not the fault of the test
framework if someone writes a test that doesn't meet requirements.
Understanding the requirements, writing a test to meet them, and then
'finding a back door' to hide the fact that the code doesn't pass the
test is self-defeating. Where I worked you'd be out the door quick sharp.

It's just another tool.
 
I

Ike Naar

I think you're missing the point about TDD. Upthread someone mentioned
a method to calculate the next prime after some value.

If the requirements were to only return the next prime after 1, 10, 100,
1000 etc. up to a given value, then the example test would be fine; if the
test passes, then the requirements have been met ... result.

If the requirements were to take 'any' value and calculate the next
prime then the test setup or 'fixture' was incorrect. Not meeting
requirements in the fixture is just as bad as not meeting them in the
end result code. If the fixture is wrong then so are the results (almost
guaranteed). Writing code to pass a well-defined test will result in
code that passes the test and therefore meets requirements. It's not
magic, but it does force you to think about exactly what you are doing
before you actually do it.

Let's say the requirement was that the algorithm should work for any
positive integer, and let's suppose that integers are 32 bits wide.
Then a test setup or fixture that would not be "wrong" would require
approximately 2 to the power 31 test cases. That's a huge number of
test cases.
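
Taken literally, a fixture like that amounts to the following harness; it
is only a sketch showing the scale involved, and it links against whatever
primeafter() implementation is under test. (2147483647, i.e. INT_MAX,
happens to be prime, so the answer always fits in an int.)

#include <limits.h>

int primeafter(int n);                     /* the function under test */

static int ref_is_prime(int n)
{
    if (n < 2)
        return 0;
    for (int d = 2; (long long)d * d <= n; d++)
        if (n % d == 0)
            return 0;
    return 1;
}

/* Roughly 2^31 cases: correct, but hardly a practical unit test. */
int exhaustive_check(void)
{
    for (int n = 1; n < INT_MAX; n++) {
        int p = primeafter(n);
        if (p < n || !ref_is_prime(p))
            return 0;                      /* answer is not a prime >= n */
        for (int q = n; q < p; q++)
            if (ref_is_prime(q))
                return 0;                  /* a smaller prime was skipped */
    }
    return 1;
}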
 
