Ian Collins wrote, On 28/08/07 04:46:
If performed, internal formal testing is still a step away from
developer testing.
Yes. However, above you said that it should not matter for unit testing
whether you use the same compiler or not. Since unit testing can be, and
often *is*, formal, such a statement is at least misleading. Had you
said that it did not matter for informal testing, and had the OP been
asking about informal testing, you might have a point, but it was never
stated that the unit testing was informal.
How so? A unit test suite doesn't just vanish when the code is
released, it is an essential part of the code base.
Simple. If the testing is not formal then you, the next developer, have
no guarantee that the test suite is in a usable state. So you have to
fully validate any tests you will rely on during your development.
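To make that concrete, a unit test is just more code that lives in the
code base alongside what it tests. A minimal sketch (the function and
the values here are invented for illustration, not from any real
project):

    #include <cassert>

    /* Function under test -- invented for this example. */
    int clamp(int value, int low, int high)
    {
        if (value < low)  return low;
        if (value > high) return high;
        return value;
    }

    /* The tests compile and run as part of every build. */
    int main()
    {
        assert(clamp(5, 0, 10) == 5);    /* in range   */
        assert(clamp(-3, 0, 10) == 0);   /* below low  */
        assert(clamp(42, 0, 10) == 10);  /* above high */
        return 0;
    }

If the process does not require that program to be kept building and
passing (i.e. if it is informal), the next developer inherits it with no
guarantee it still reflects the code, which is exactly why they would
have to re-validate it.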
That depends on your definition of Acceptance tests. In our case, they
are the automated suite of tests that have to pass before the product is
released to customers.
Yes, this could be a matter of definition. To me, an acceptance test is
the customer coming in and witnessing some pre-agreed tests; if they
pass, the customer accepts the SW and/or HW (and pays for it). It has
nothing to do with whether the company is prepared to give the SW to the
customer.
Again, that depends on your process.
I've not worked for a company where they would be prepared to try and
get a customer to accept SW before having a decent level of confidence
that it is correct *and* acceptable to the customer.
Why? Our acceptance tests are very comprehensive, written by
professional testers working with a product manager (the customer).
It sounds like you don't have fully automated acceptance tests.
Wherever possible, all tests should be fully automated.
It is not possible, for a reasonable cost, to fully automate all
testing. On a number of projects I have worked on, the formal testing
included deliberately connecting up the system incorrectly (and changing
the physical wiring whilst the SW is running), inducing faults in the HW
that the SW was intended to test, responding both correctly and
incorrectly to operator prompts, putting a plate in front of a camera so
that it could not see the correct image whilst the SW was looking at it,
swapping a card in the system for a card from a system with a different
specification, etc. It would literally require a robot to automate some
of this testing, and some of the rest of it would require considerable
investment to automate. Compared to the cost of the odd few man-weeks to
manually run through the formal testing with a competent witness, the
cost of automation would be stupid.
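To give an idea of the investment: to automate even the camera example,
every HW access in the SW would first have to be routed through an
interface that a test can substitute a fault-injecting fake for,
something like this sketch (every name here is invented for
illustration):

    #include <cassert>

    struct Image { bool usable; };

    /* Hypothetical HW abstraction the SW would have to be
       rewritten to use instead of driving the camera directly. */
    class CameraInterface {
    public:
        virtual ~CameraInterface() {}
        virtual Image capture() = 0;
    };

    /* Fake standing in for "a plate in front of the camera". */
    class ObscuredCamera : public CameraInterface {
    public:
        virtual Image capture() { Image i = { false }; return i; }
    };

    /* Code under test takes the interface, not the real HW. */
    bool target_visible(CameraInterface &camera)
    {
        return camera.capture().usable;
    }

    int main()
    {
        ObscuredCamera blocked;
        assert(!target_visible(blocked)); /* SW must cope gracefully */
        return 0;
    }

Retro-fitting that kind of seam throughout SW that was written to drive
the HW directly is the "considerable investment" I mean, and even then
it only simulates the fault; it proves nothing about the behaviour
against the real wiring.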
BTW, on the SW I am mainly thinking of, there were so few bug reports
that on one occasion, when the customer representative came to us for
acceptance testing a few years after the previous version, both the
customer representative and I could remember all of the fault reports
and discuss why I knew none of them were present in the new version. The
customer representative was *not* a user (he worked for a "Procurement
Executive" and not for the organisation that used the kit), so he would
not have seen the SW for several years.
If you doubt the quality of the manual testing, then look at how many
50,000-line pieces of SW have as few as 10 fault reports from customers
over a 15-year period. Most of those fault reports were in the early
years, and *none* were after the last few deliveries I was involved in.
BTW, if they are still using the SW at the start of 2028 we have a
problem, but that is documented and could easily be worked around.