automated coding standard validation?


Richard Heathfield

Ian Collins said:
Roberto Waltman wrote:
What a daft rule! I'd like to see a tool that can enforce that one.

#include <ctype.h>  /* for tolower */

int casetest(const char *s, const char *t)
{
    int diff = 0;
    while(diff == 0 && *s != '\0' && *t != '\0')
    {
        if(tolower((unsigned char)*s++) != tolower((unsigned char)*t++))
        {
            diff = 1;
        }
    }
    if(*s != '\0' || *t != '\0')
    {
        diff = 1;
    }
    return diff;
}

int underscoretest(const char *s, const char *t)
{
    int diff = 0;
    while(diff == 0)
    {
        while(*s == '_')
        {
            ++s;
        }
        while(*t == '_')
        {
            ++t;
        }
        if(*s == '\0' || *t == '\0')
        {
            break;  /* don't walk past a terminator after a trailing '_' run */
        }
        if(*s++ != *t++)
        {
            diff = 1;
        }
    }
    if(*s != '\0' || *t != '\0')  /* a leftover non-underscore tail differs */
    {
        diff = 1;
    }
    return diff;
}

etc etc ad nauseam.
 

Ian Collins

Richard said:
Ian Collins said:

Roberto Waltman wrote:


What a daft rule! I'd like to see a tool that can enforce that one.

[comparison code snipped]

etc etc ad nauseam.
That's the easy bit, parsing the source for identifiers and comparing
them all would be more fun.
 

Michael Mair

Ian said:
An XP team doesn't waste any time on code reviews; the practices
(collective code ownership and pair programming) take care of them.

Please use only C-related or standard Usenet abbreviations.

I assume that you are talking about eXtreme Programming.
As there are no reliable statistics or universal / general
experiences that say "'XP' is always better than the good old
way (whatever this may be, with or without code reviews)", let
us not waste time with an off-topic debate which has nothing
to do with the OP's request.

If someone is stuck with MISRA because the only customer (say,
a large automobile manufacturer) dictates the conformance, then
one uses a MISRA checker -- independently of the way the
software comes into existence.
If they weren't, how would the code compile?

IIRC (I don't have the MISRA stuff at home) identifiers must be
unique among all legal identifiers throughout the whole programme,
in all namespaces, i.e. the bad old
typedef struct foo foo;
or
struct foo {....} bar;
int foo;
are no longer allowed.
What a daft rule! I'd like to see a tool that can enforce that one.

Parse all translation units, collect all macro and other identifiers,
throw out all underscores,
a) replace all '0'/'D' with 'O' and so on
b) uppercase them
c) sensible combination of a) and b).
Sort the whole thing for a), b), c), respectively and check for
uniqueness.
That bad?
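Steps a) to c) can be sketched in a few lines of Python. This is a hedged
illustration, not a real MISRA/JSF checker: the confusable groups are taken
from the quoted rule, and the helper names (`canonical`, `find_clashes`) are
my own invention.

```python
def canonical(ident):
    """Collapse an identifier so that any two identifiers the rule
    forbids from coexisting map to the same canonical string."""
    s = ident.replace("_", "").upper()   # underscores first, then case
    # Fold each visually confusable group onto one representative.
    for group, rep in (("0D", "O"), ("1L", "I"), ("5", "S"), ("2", "Z"), ("H", "N")):
        for ch in group:
            s = s.replace(ch, rep)
    return s

def find_clashes(identifiers):
    """Group distinct identifiers that share a canonical form."""
    buckets = {}
    for ident in identifiers:
        buckets.setdefault(canonical(ident), set()).add(ident)
    return [sorted(group) for group in buckets.values() if len(group) > 1]
```

Sorting and comparing, as above, then becomes a dictionary lookup; folding
the n/h pair after uppercasing is one possible reading of the rule.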

Your unit tests will flush this one.

Your lint tool or the compiler warnings do so much earlier.

Please don't misunderstand me:
Tools to check "coding standards" by no means replace proper
design, coding, and tests, but they can check the stuff that
everyone else's time is too expensive for, or where dumb
mechanical checking is sufficient.


Cheers
Michael
 

Michael Mair

Ian said:
Richard said:
Ian Collins said:
[comparison code snipped]
etc etc ad nauseam.

That's the easy bit, parsing the source for identifiers and comparing
them all would be more fun.

You must be kidding. As long as you are not doing Very Bad Things
-- which you shouldn't in the first place, as we are discussing
coding standards that disallow them ;-) -- a simple state
machine gives you
preprocessor directive lines
string literals
character literals
comments
"rest"
Throw out string literals, character literals, and comments, throw
out string and character literals from preprocessor directive lines.
Parse "rest" and preprocessor directive lines appropriately.
Use intelligent data structures and/or hashing to avoid high storage
or time consumption. Use out-of-the-box stuff for this.
This is a matter of hours if done with the right tool (say Perl or
Python; heck, even awk can do that easily).

The only thing to decide is whether keywords/preprocessor directives
should be part of your identifier list and what kinds of comments etc.
are acceptable.
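As a rough sketch of the state machine described above, in Python rather
than Perl or awk (a hedged illustration under the stated no-Very-Bad-Things
assumption; it ignores trigraphs, line splicing and the like, and the
function names are my own):

```python
import re

# Scanner states for a minimal pass over C source.
CODE, STRING, CHAR, LINE_COMMENT, BLOCK_COMMENT = range(5)

def strip_literals_and_comments(source):
    """Remove string literals, character literals and comments,
    keeping everything else (including preprocessor lines)."""
    out = []
    state = CODE
    i = 0
    while i < len(source):
        c = source[i]
        nxt = source[i + 1] if i + 1 < len(source) else ""
        if state == CODE:
            if c == '"':
                state = STRING
            elif c == "'":
                state = CHAR
            elif c == "/" and nxt == "/":
                state = LINE_COMMENT
                i += 1
            elif c == "/" and nxt == "*":
                state = BLOCK_COMMENT
                i += 1
            else:
                out.append(c)
        elif state == STRING:
            if c == "\\":
                i += 1              # skip the escaped character
            elif c == '"':
                state = CODE
        elif state == CHAR:
            if c == "\\":
                i += 1
            elif c == "'":
                state = CODE
        elif state == LINE_COMMENT:
            if c == "\n":
                state = CODE
                out.append(c)       # keep the line structure
        else:                        # BLOCK_COMMENT
            if c == "*" and nxt == "/":
                state = CODE
                i += 1
        i += 1
    return "".join(out)

def identifiers(source):
    """Collect every identifier-shaped token; keywords are included,
    since filtering them out is the policy decision mentioned above."""
    return set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*",
                          strip_literals_and_comments(source)))
```

Feed the collected set into whatever uniqueness check the standard
requires; the scanning itself really is the easy bit.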


Cheers
Michael
 

Ian Collins

Michael said:
Ian said:
Richard said:
Ian Collins said:

Roberto Waltman wrote:


<snip>

--> "Identifiers will not differ by:
* Only a mixture of case
* The presence/absence of the underscore character
* The interchange of the letter 'O', with the number '0' or the
letter 'D'
* The interchange of the letter 'I', with the number '1' or the
letter 'l'
* The interchange of the letter 'S' with the number '5'
* The interchange of the letter 'Z' with the number '2'
* The interchange of the letter 'n' with the letter 'h'. "
(Joint Strike Fighter Air Vehicle C++ Coding Standards 2005,
Rule 48)


What a daft rule! I'd like to see a tool that can enforce that one.
[comparison code snipped]
etc etc ad nauseam.


That's the easy bit, parsing the source for identifiers and comparing
them all would be more fun.


You must be kidding. As long as you are not doing Very Bad Things
-- which you shouldn't in the first place, as we are discussing
coding standards that disallow them ;-) -- a simple state
machine gives you
preprocessor directive lines
string literals
character literals
comments
"rest"
Throw out string literals, character literals, and comments, throw
out string and character literals from preprocessor directive lines.
Parse "rest" and preprocessor directive lines appropriately.
Use intelligent data structures and/or hashing to avoid high storage
or time consumption. Use out-of-the-box stuff for this.
This is a matter of hours if done with the right tool (say Perl or
Python; heck, even awk can do that easily).

The only thing to decide is whether keywords/preprocessor directives
should be part of your identifier list and what kinds of comments etc.
are acceptable.
True, but the rule is from a C++ Coding Standard, which adds another
level of complexity to the parsing.
 

Ian Collins

Michael said:
Please use only C-related or standard Usenet abbreviations.

I assume that you are talking about eXtreme Programming.
As there are no reliable statistics or universal / general
experiences that say "'XP' is always better than the good old
way (whatever this may be, with or without code reviews)", let
us not waste time with an off-topic debate which has nothing
to do with the OP's request.
Sorry if techniques for developing reliable C applications are
considered off topic here.

I was simply trying to point out there are often (but not always)
process techniques that can replace tools.
 

Roberto Waltman

<OT>
I have never worked on projects using "extreme programming"
techniques, so my opinions are based only on readings on the subject
and other people reports.

Based on my inexperience, I don't accept the premise that pair
programming will have a significant impact on the quality of the code
produced. I can see that "four eyes are better than two" on spotting
logic errors, but I also believe that two people working together may
influence each other into making the same mistakes, and/or the code
may reflect compromises done to keep harmony in the team as opposed to
reflecting what each member thought was the best solution to a problem.

A formal code review brings people with more of an "outsider" point
of view, more detachment and more impartiality.

Also there are projects or situations, especially in embedded systems,
(especially in *large* embedded systems,) where the extreme programming
principles of making frequent releases and involving customer feedback
are just impossible to follow. (The hardware does not exist yet, the
custom building where the hardware will be installed is not complete
yet, the wiring required to run a system simulation will be installed
2 months after the building is complete, and the customer will not
provide the 150 workers for half a day required to run a simulation
anyhow, except for a few pre-defined acceptance tests at project
milestones, determined when the contract was signed.
All this from a *real* project.)
I was simply trying to point out there are often (but not always)
process techniques that can replace tools.

Process and tools should complement each other, but for the type of
validation I was referring to in my previous post, (syntax and naming
conventions believed to reduce the possibility of making mistakes) I
have no doubt that tools should take precedence.
Isn't "automate, automate, automate" one of the extreme programming
mottos?

With regards to the coding rule you called 'daft': ("Identifiers will
not differ by: ... The interchange of the letter 'I', with the number
'1' or the letter 'l')

1 int *port11;
2 int *port1l;
3 int *port1I;
4 int *portII;
5 int *portlI;

All these declarations are different, yet with the font my newsreader
is using, I cannot tell the difference between lines 4 and 5. That
rule eliminates a potential source of errors that COULD NOT BE
DETECTED by visual inspection.
I would call it a life saver...
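Roberto's point is easy to demonstrate mechanically. A throwaway sketch
(Python; the folding table is an assumption covering only the I/1/l
group from the quoted rule):

```python
names = ["port11", "port1l", "port1I", "portII", "portlI"]

def fold(name):
    # Collapse the I/1/l confusable group onto a single representative.
    return "".join("I" if ch in "1lI" else ch for ch in name)

# All five visually confusable names fold to the same string, so a
# checker flags the clash even where visual inspection cannot.
print({fold(n) for n in names})   # prints {'portII'}
```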
 

Ian Collins

Roberto said:
<OT>
I have never worked on projects using "extreme programming"
techniques, so my opinions are based only on readings on the subject
and other people reports.

Based on my inexperience, I don't accept the premise that pair
programming will have a significant impact on the quality of the code
produced. I can see that "four eyes are better than two" on spotting
logic errors, but I also believe that two people working together may
influence each other into making the same mistakes, and/or the code
may reflect compromises done to keep harmony in the team as opposed to
reflecting what each member thought was the best solution to a problem.

A formal code review brings people with more of an "outsider" point
of view, more detachment and more impartiality.
That's why we have collective code ownership and swap pairs every couple
of hours. Over a short time, several people will work on and thus
review the code.
Also there are projects or situations, especially in embedded systems,
(especially in *large* embedded systems,) where the extreme programming
principles of making frequent releases and involving customer feedback
are just impossible to follow. (The hardware does not exist yet, the
custom building where the hardware will be installed is not complete
yet, the wiring required to run a system simulation will be installed
2 months after the building is complete, and the customer will not
provide the 150 workers for half a day required to run a simulation
anyhow, except for a few pre-defined acceptance tests at project
milestones, determined when the contract was signed.
All this from a *real* project.)

That doesn't stop you following the other practices and you are in a
good position to release something once the hardware is ready. You can
also include the hardware developers as customers and provide them with
the code they require for their testing, when they require it.

I've managed three very successful embedded XP projects.
Process and tools should complement each other, but for the type of
validation I was referring to in my previous post, (syntax and naming
conventions believed to reduce the possibility of making mistakes) I
have no doubt that tools should take precedence.

Not if your process makes those mistakes far less likely. If you are
building the system test-first, lots of those conditions (the if (x=0)
scenario, for example) just don't happen. You break a test, undo what you
did last, redo it and retest.
Isn't "automate, automate, automate" one of the extreme programming
mottos?
It is; all tests, both unit and acceptance, are automated and run
often.
With regards to the coding rule you called 'daft': ("Identifiers will
not differ by: ... The interchange of the letter 'I', with the number
'1' or the letter 'l')

1 int *port11;
2 int *port1l;
3 int *port1I;
4 int *portII;
5 int *portlI;

All these declarations are different, yet with the font my newsreader
is using, I cannot tell the difference between lines 4 and 5. That
rule eliminates a potential source of errors that COULD NOT BE
DETECTED by visual inspection.

But it would fail a test!
I would call it a life saver...

So is the test...
 

Michael Mair

Ian said:
Sorry if techniques for developing reliable C applications are
considered off topic here.

That was not the point.
There are dumb checks computers are better at than humans.
As computer time is cheap nowadays, only the cost of buying
an appropriate tool or rolling your own counts.
The OP did not state much about the development process and
even for Extreme Programming, conformance to coding standards
controlled by tools and code reviews may be a point on the
agenda.

I was simply trying to point out there are often (but not always)
process techniques that can replace tools.

Which does not contradict what I said in
<[email protected]>.
The "off-topic debate" I wanted to avoid is the one about
the usefulness of Extreme Programming in all areas where
code is developed.


Cheers
Michael
 

Dann Corbit

Roberto Waltman said:
<OT>
I have never worked on projects using "extreme programming"
techniques, so my opinions are based only on readings on the subject
and other people reports.

Based on my inexperience, I don't accept the premise that pair
programming will have a significant impact on the quality of the code
produced. I can see that "four eyes are better than two" on spotting
logic errors, but I also believe that two people working together may
influence each other into making the same mistakes, and/or the code
may reflect compromises done to keep harmony in the team as opposed to
reflecting what each member thought was the best solution to a problem.

A formal code review brings people with more of an "outsider" point
of view, more detachment and more impartiality.
<ot>
I have done lots of pair programming. It is the one part of extreme
programming that I am absolutely sure has a very solid payback.

If you have never tried it, try it.
</ot>
[snip]
 

Ian Collins

Al said:
Would those projects not have been successful without XP?
Two of them possibly, but I doubt we would have had such low defect
rates, or been able to add new features as quickly as we did.

The third, no, the requirements became very fluid and we ended up
releasing the product to one customer many months before it was
'finished' and updating their units as they required extra features.
 

Ian Collins

tedu said:
Which test? I've heard rumors that not all developers achieve 100%
test coverage before the product ships.
If they are doing Test Driven Development correctly, they should.
 

Ian Collins

Al said:
Not by testing. As the old saying goes, testing can only prove the
presence of bugs, not the absence of bugs.
That's where we differ: when you use TDD, your unit tests /are/ your
live runnable specification. So if they pass, the code meets its
specification.
 

Al Balmer

That's where we differ: when you use TDD, your unit tests /are/ your
live runnable specification. So if they pass, the code meets its
specification.

Little comfort to the customer who manages to break it, but a good
reason for making the customer responsible for the specification.
 
