Why Case Sensitive?

Flash Gordon

Ian Collins wrote, On 20/07/07 23:11:
But the design may be done in code...

You still have some vague idea of a design before you start writing
code; after all, how can you write anything if you don't know what you
are writing?
 
Flash Gordon

Malcolm McLean wrote, On 20/07/07 23:08:
It doesn't work like that.
I've had this before from people who work in businessy type systems and
think that games can be formally specified using the same techniques.

I've worked on lots of different things.
They can't and no games company does that, for lots of reasons.

Doesn't sound like scientific programming to me!
> One is that no one cares if output is correct, just whether it plays.

The people I've met in the games industry have the game planned (story
boarding or whatever) before they start writing it. This game plan/story
board/etc is what some would call a simple set of requirements.
> Another is that you don't know how well the program is going to perform until it is in quite an advanced stage of development, at which point you decide how much geometry to throw at the rasteriser.

It is easy to design systems such that you can change parameters, such
as how much detail. In fact, since games are often made to run on
systems with vastly different capabilities they have to allow for such
things or have the expense of writing the game from scratch several times!
Scientific programming is a bit different to games, but again you can't
generally specify the program and then write it. If you can do that then
it's not research.

Just because you don't know if something will work does not mean you
don't have requirements for it. I've worked on projects where the entire
point of the project was to see if something would work, or to compare
different approaches to doing things etc, and we still had requirements
for them. It's just that the requirements did not specify how well the
algorithm would work. Oh, and in some cases the research reached the
conclusion things were not viable, at least not with the defined approach.

I'm not saying all requirements are nailed down to the last detail, and
indeed with business SW they are often not nailed down that tightly, but
even if the requirement is just "implement this type of neural network
of this size to be trained like this and then see if it can do the job"
it is still a requirement. If a requirement is to add a nice user
interface *if* the method proves worthy of further testing, then that is
still an extensibility requirement.
 
Ian Collins

Flash said:
Ian Collins wrote, On 20/07/07 23:11:

You still have some vague idea of a design before you start writing
code; after all, how can you write anything if you don't know what you
are writing?

You start writing tests, concreting up your initial requirements. Your
tests then become the living design document.
 
Flash Gordon

Ian Collins wrote, On 21/07/07 00:16:
You start writing tests, concreting up your initial requirements.

Your tests still have to be designed if they are to be worth having.
You also need to have some idea of what you are going to write if
you are going to write a test for it. Also I've seen code that would
pass any test based on the requirements (because it worked to spec) but
the code was still badly designed and needlessly inefficient.
> Your tests then become the living design document.

All of your documents should be living documents which are updated
as/when problems are found in them or new requirements come along.
 
Ian Collins

Flash said:
Ian Collins wrote, On 21/07/07 00:16:

Your tests still have to be designed if they are to be worth having.

You use the tests to drive the design.
You also need to have some idea of what you are going to write if
you are going to write a test for it. Also I've seen code that would
pass any test based on the requirements (because it worked to spec) but
the code was still badly designed and needlessly inefficient.
When you write test first, this problem is much easier to solve and less
likely to happen because the tests give the safety net to continuously
refactor the code. You really have to do it to understand the process.
All of your documents should be living documents which are updated
as/when problems are found in them or new requirements come along.

Which is why tests are the ideal documentation, they always define the
operation of the code and can never be out of date. If a problem is
found, add a new failing test and fix the code so it passes.
 
SM Ryan

# Why is C case sensitive?

Why not? The decision is arbitrary.

# In real life such a situation would be ridiculous (eg. the 16 ways of
# writing my name) and, in some cases, frightening.

Writing programs is not the same as writing novels.
 
Malcolm McLean

Ian Collins said:
You use the tests to drive the design.
You do need some idea what you are going to achieve, as Flash points out. No
point writing endless Windows GUI code if you plan a website backend, or
efficient matrix routines if you are writing a text adventure.

However generally you know you are going to need some components, and these
will be reusable. For instance I was writing a few utilities to back up the
protein folding work. These needed command-line arguments.

getopt() is non-standard (it is POSIX, not ANSI C), so I needed an options
parser built on top of ANSI C. Looking at the getopt interface, I thought
"this isn't all that well-designed, I'm sure I could do better". I also
didn't find exactly what I wanted on the web. So options.c was born, designed
to make life easy for the programmer whilst still providing a reasonably
high standard of parsing.
There is a very difficult problem when a long option is an anagram of
allowed flags (e.g. we allow flags a-f and r for recursive, but also the
long option -fred).

So the module is written, tested, seems to be OK. The exact API is sorted
out in testing and coding, because all I want to do is write a parser. At
this stage I am in control, there is no legacy code that has to be
maintained, I haven't promised any interface or even functionality to any
other person. Often, as with the anagram problem, it is only when you start
testing and coding that you realise some inherent problems, and that affects
the design. I didn't want to demand a flags parameter in the option
constructor, because you pass that information in anyway in the queries.
However there is simply no easy way of avoiding it if you allow a list of
one-letter flags in option 1 as well as long options, and that is such a
convention that you can't break it.

That's just one little module, a jigsaw piece. Unfortunately the programs
cannot be simply an assembly of precut pieces. You always need specialised
pieces, sometimes you need to modify a piece.
 
Ian Collins

Malcolm said:
You do need some idea what you are going to achieve, as Flash points
out. No point writing endless Windows GUI code if you plan a website
backend, or efficient matrix routines if you are writing a text adventure.
True, but what you don't need to do is to spend time working out the
details before you start to code.
 
Flash Gordon

Ian Collins wrote, On 21/07/07 01:51:
You use the tests to drive the design.

Are you claiming you write tests for your tests? If not, you *still*
have to design your tests. If you do write tests for your tests then you
need to design those, so you still do not escape from needing to do some
designing.

Take an example, there is a requirement to keep a gimbal mounted camera
pointed at the same location on the ground however the aircraft moves,
write the tests. Since there are multiple solutions (multiple gimbal
positions which will point at the same place) you need to design your
tests to prove that the correct solution is reached, and the shortest
path will be taken, and work out where the edge cases are etc. I know,
because on such a system I spent a long time designing the tests (longer
than actually writing them), because getting the tests right was
difficult. I even had to test my tests by reading the values rather than
the pass/fail results and do a sanity check on them. This was for a core
of probably about 20 lines of code implementing a requirement that was 2
equations and 1 paragraph of text.
When you write test first, this problem is much easier to solve and less
likely to happen because the tests give the safety net to continuously
refactor the code. You really have to do it to understand the process.

That has not solved the problem of bad code that passes the tests. It
means you will know if your rewrite still passes the tests, but it has
not saved you from having to do the rewrite. So you still need to design
your SW (to an appropriate level for the size/criticality of the SW) so
you end up with something that does not need rewriting as soon as there
is a change in requirements or an additional requirement.
Which is why tests are the ideal documentation, they always define the
operation of the code and can never be out of date. If a problem is
found, add a new failing test and fix the code so it passes.

No, tests are not ideal documentation because they do not tell you the
structure of the code, in fact there is *no* ideal documentation just
the best that can be achieved within the various constraints. Tests also
need to be designed to be useful, so you need a higher level requirement.

Of course, I don't expect a 20 page requirement specification and full
<methodology of choice> design for a 20 line program, nor do I expect as
much for a game as for a safety critical system.
 
Flash Gordon

Ian Collins wrote, On 21/07/07 08:29:
True, but what you don't need to do is to spend time working out the
details before you start to code.

Not all design goes down to pseudo-code with a 1:1 mapping to the
actual code. If you think that designing SW means always reducing it to
the nth degree then you need to learn more about designing SW.
 
max.trinitron

Why is C case sensitive?

I know it's a bit late to change it now but there would seem to be far
more advantages in ignoring letter case in source code.

I have seen Pascal (which has case-insensitive identifiers)
programs which try to use camel case in an inconsistent way.
IMHO it looks ugly when identifiers are written differently on
every use:

CamelCase, camelCase, camelcase, Camelcase, cameLcase, ...

Case sensitivity forces you to write identifiers the same way
every time.

Max
 
Ian Collins

Flash said:
Ian Collins wrote, On 21/07/07 01:51:

That has not solved the problem of bad code that passes the tests. It
means you will know if your rewrite still passes the tests, but it has
not saved you from having to do the rewrite. So you still need to design
your SW (to an appropriate level for the size/criticality of the SW) so
you end up with something that does not need rewriting as soon as there
is a change in requirements or an additional requirement.
I don't deny you have to design the tests, in fact I find designing
tests a very good way of stepping back from the code. So I think we
agree to some extent. A well thought out set of tests should lead to a
well structured piece of code that does exactly what is required to
solve the problem and no more.

One thing I have never seen is a piece of code that does not require a
rewrite if its requirements change, that kind of implies it did the
wrong thing or more than was required in the first instance.
No, tests are not ideal documentation because they do not tell you the
structure of the code, in fact there is *no* ideal documentation just
the best that can be achieved within the various constraints. Tests also
need to be designed to be useful, so you need a higher level requirement.
The tests do provide one vital form of documentation that nothing else
can: examples of how to use the code. I'm sure I'm not alone in heading
for the examples section of any manual page to see how the functions in
question should be used.

With the tests filling the niche of the detailed requirement (or use
case description), the higher level requirement can be fairly abstract
and brief.

I do believe that we require written requirements, but I believe these
should be written by and for the customer or users, not the engineers.
Tests convert the user's requirements into the functional design
requirements of the code.
 
Kenny McCormack

I have seen Pascal (which has case-insensitive identifiers)
programs which try to use camel case in an inconsistent way.
IMHO it looks ugly when identifiers are written differently on
every use:

CamelCase, camelCase, camelcase, Camelcase, cameLcase, ...

Case sensitivity forces you to write identifiers the same way
every time.

Max

Yes. As I always used to tell the story, in the original 'vi' editor
(that is, before the full-functioned clones came out) there wasn't any
simple way to search case-insensitively. So, whenever I was dealing
with a case-insensitive language, it was necessary to do something like:

/[Ss][Oo][Mm][Ee][Dd][Uu][Mm][Bb][Ii][Dd][Ee][Nn][Dd][Tt][Ii][Ff][Ii][Ee][Rr][Tt][Hh][Aa][Tt][Mm][Ii][Gg][Hh][Tt][Bb][Ee][Ii][Nn][Aa][Nn][Yy][Cc][Aa][Ss][Ee]/

Because, of course, I could never trust the other programmers working on
the project to capitalize in a consistent way.

Now that vim (and others) have built-in support for this, it's not such
a big deal, but it is still a PIA. As it turns out, one of the
languages in which I currently program *is* case-insensitive and I still
find it quaint that I need to turn the "ic" option on when editing.
 
Flash Gordon

Ian Collins wrote, On 21/07/07 12:25:
I don't deny you have to design the tests, in fact I find designing
tests a very good way of stepping back from the code. So I think we
agree to some extent. A well thought out set of tests should lead to a
well structured piece of code that does exactly what is required to
solve the problem and no more.

Yes, there is probably more agreement between us than there might at
first appear.
One thing I have never seen is a piece of code that does not require a
rewrite if its requirements change, that kind of implies it did the
wrong thing or more than was required in the first instance.

There is a *very* big difference between adding/changing a few lines
and rewriting. I have only very occasionally had to do a rewrite
because of a change in requirements, it has almost always been small
changes to the existing code and anything from a small to a large amount
of additional code. The times where I have had to do major rewrites it
has always been with stuff where one look was enough for me to say it
was badly designed.
The tests do provide one vital form of documentation that nothing else
can: examples of how to use the code.

Wrong. Any decent manual for a library will provide examples of usage.
> I'm sure I'm not alone in heading for the examples section of any manual page to see how the functions in question should be used.

See, you even find the example somewhere other than in the tests.
With the tests filling the niche of the detailed requirement (or use
case description), the higher level requirement can be fairly abstract
and brief.

This all depends on the project. For some this is true, for others it is
false.
I do believe that we require written requirements, but I believe these
should be written by and for the customer or users, not the engineers.

IMHO wrong. They should be written for *both* otherwise you have a
disconnect and a much higher chance of the code not doing what the
customer wants.
Tests convert the user's requirements into the functional design
requirements of the code.

Tests don't IMHO do a good job of telling you the data flows in a
system, nor how all the components fit together.
 
Malcolm McLean

Flash Gordon said:
Wrong. Any decent manual for a library will provide examples of usage.
You'd be surprised. I am currently writing BabyX, a toolkit for X Windows.
It is a long time since I last used Xlib, and it is my unhappy task to try
to understand the system. Very often I have to write little exploratory
programs.
IMHO wrong. They should be written for *both* otherwise you have a
disconnect and a much higher chance of the code not doing what the
customer wants.
Normally the customer doesn't really know what he wants. Inevitably he will
look at his current procedures and specify that they should be
"computerised". Which normally means that the process becomes less efficient
because computers are worse than pen and paper for form filling. However
there is also real potential in the computer, by changing the process.
Tests don't IMHO do a good job of telling you the data flows in a system,
nor how all the components fit together.
How does the system I am currently using fit together? I am typing this into
a window which is probably a "Rich Edit" control. It then goes into some
internal program-specific buffer in Microsoft Outlook Express. Then it talks
to the Vista system to be turned into an email. That goes to British
Telecom, thence to the Usenet system. Then your browser sucks it up, and
somehow it appears on screen. The process is highly intricate, but it hasn't
actually been designed by anyone. The components have been designed, their
interconnections haven't.
If you want an analogy, in a planned Communist economy all the "goods flows"
were designed. Factory X would make 100 litres of red ink, factory Y 100,000
pens, which would be delivered to offices which had a ration of five red
pens per bureaucrat per year. In a free market economy, company X sets up a
plant to produce red ink, company Y a plant to make pens. Company X decides
to produce only to contract whilst Y produces speculatively. So Y orders 100
litres of ink from X, makes the pens, and puts up an ad saying "red pens for
sale". The government offices then buy pens as they need them.
In general the unplanned network functions a lot better. That's true of
computer systems as well. If you try to produce a dataflow diagram for an
entire company, either the diagram must be exactly right or someone
somewhere may lose a lot of money. If you give employees components - web
browsers, word processors, emails, and ethernet cables, they find ways of
linking them up to be productive.
 
Ian Collins

Flash said:
Ian Collins wrote, On 21/07/07 12:25:

Yes, there is probably more agreement between us than there might at
first appear.
I guess the differences come down to the process context in which tests
are used.
There is a *very* big difference between adding/changing a few lines
than in rewriting. I have only very occasionally had to do a rewrite
because of a change in requirements, it has almost always been small
changes to the existing code and anything from a small to a large amount
of additional code. The times where I have had to do major rewrites it
has always been with stuff where one look was enough for me to say it
was badly designed.
Then you have been lucky. I've frequently been in the position where a
design or algorithm is optimised for one set of requirements, but
unsuitable for another.
Wrong. Any decent manual for a library will provide examples of usage.
But the existence of a decent set of tests gives you this for free, so
the developers don't have to produce (and maintain) additional code.
See, you even find the example somewhere other than in the tests.
You misunderstand, the tests *are* the examples!
IMHO wrong. They should be written for *both* otherwise you have a
disconnect and a much higher chance of the code not doing what the
customer wants.
That depends on your process. If you are involving the customer in the
day to day development, or failing that providing them frequent (every
week or two) drops, this is unlikely to happen.
Tests don't IMHO do a good job of telling you the data flows in a
system, nor how all the components fit together.

From my perspective tests document the module they are testing, so some
form of connecting documentation is still handy.
 
Flash Gordon

Malcolm McLean wrote, On 21/07/07 22:34:
You'd be surprised.

No I would not, since I know there are a lot of terrible manuals for
libraries.
> I am currently writing BabyX, a toolkit for X Windows. It is a long time since I last used Xlib, and it is my unhappy task to try to understand the system. Very often I have to write little exploratory programs.

That is a comment on the quality of the documentation for Xlib, not a
comment on whether a decent manual will provide examples of usage. It
may well also depend on which implementation of Xlib you are referring
to, since there is more than one.
Normally the customer doesn't really know what he wants.

Yes, which is why you need to write the requirements in a way the
customer understands.
> Inevitably he will look at his current procedures and specify that they should be "computerised". Which normally means that the process becomes less efficient

This is why you have people who understand the relevant business go to
the customer and work through what the real requirements are.
> because computers are worse than pen and paper for form filling.

That varies, since one of the input methods available is a physical form
with a special pen which people can fill in...

It also depends on whether the input system is properly designed for the
use to which it will be put.
> However there is also real potential in the computer, by changing the process.

Yes, which is why you (the SW company) send in an expert who understands
the business to go through all this stuff.
How does the system I am currently using fit together? I am typing this

<snip>

That massive example had nothing to do with whether tests document how a
system fits together or the data flows, so I really don't know why you
posted it. You seem to have completely missed the point of the discussion.
 
Flash Gordon

Ian Collins wrote, On 21/07/07 22:46:
I guess the differences come down to the process context in which tests
are used.

Also in how rigorously other parts of the process are enforced.
Then you have been lucky. I've frequently been in the position where a
design or algorithm is optimised for one set of requirements, but
unsuitable for another.

I've always used my knowledge of the application domain to design SW in
such a way that it can be modified.
But the existence of a decent set of tests gives you this for free, so
the developers don't have to produce (and maintain) additional code.

You misunderstand, the tests *are* the examples!

IMHO a good example is not necessarily the same as a good test. A good
example will show normal usage, a good test will stress the SW. Of
course, some of the tests might make good examples.
That depends on your process. If you are involving the customer in the
day to day development, or failing that providing them frequent (every
week or two) drops, this is unlikely to happen.

Both the customer and the development team still need to understand the
requirements.
From my perspective tests document the module they are testing, so some
form of connecting documentation is still handy.

So we agree that some design documentation is needed :)
 
Ian Collins

Flash said:
Ian Collins wrote, On 21/07/07 22:46:

I've always used my knowledge of the application domain to design SW in
such a way that it can be modified.

Then you have been lucky, my crystal ball is more opaque.

Many of the systems I have worked on have grown and changed
significantly from their original requirements, one I still support was
due for "completion" three years ago and it's still growing as the
market changes. If we had designed it for the current market
conditions, it would never have sold four years ago when it was
originally released.
So we agree that some design documentation is needed :)

"Some" is a fair compromise!
 
Malcolm McLean

Flash Gordon said:
That massive example had nothing to do with whether tests document how a
system fits together or the data flows, so I really don't know why you
posted it. You seem to have completely missed the point of the discussion.
By even trying to document "how a system fits together or the data flows"
you are making the same mistake as a Soviet central planner.
 
