Testing Program Question

Nick Keighley

Is _DEBUG_ERROR from a Microsoft specification?
yes


 It is written in the vector header.

it's in *microsoft's* vector header. They stuck an underscore on the
front; that generally means it's implementation specific.

 I think that Microsoft wrote the vector header on their own.

someone has to

Does the C++ Standard Library use cerr or clog to display error
messages instead of _DEBUG_ERROR?

they might. Better to throw an exception I'd have thought. I very much
doubt the C++ standard specifies that library routines write to cerr
or clog when errors occur (apart from things like perror and assert!).

I plan to write error messages in the debug version.

ok. I thought you wanted them in dialog boxes? I often write to a
dialog box /and/ a log file.
The error message
is able to detect argument in the function's parameter before it
triggers to warn or alert the programmer.

can't parse that

 For example, the argument
in the function's parameter should be in the range between 0 and 10.
If it exceeds 10, then an error message is displayed on the screen.

The error message procedure is similar to vector's and string's
out-of-range checks.

fine. Might be better to throw an exception or even to assert...
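
For instance, a minimal sketch of the kind of check being described
(set_level and the [0, 10] range are made up for illustration; in
practice you would pick either the assert or the exception rather
than both):

    #include <cassert>
    #include <stdexcept>

    // The argument must lie in [0, 10]. The assert catches the
    // programming error in a debug build; the exception reports it
    // if the check is wanted in a release build as well.
    void set_level(int level)
    {
        assert(0 <= level && level <= 10);
        if (level < 0 || level > 10) {
            throw std::out_of_range("set_level: level not in [0, 10]");
        }
        // ... use level ...
    }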
 
Nick Keighley

So that you can turn it off when you have to.  The designed use
would be to use some application specific macro, defined (or
not) in the command line, and then to wrap the (few) critical
functions in something like:

    #ifdef PRODUCTION
    #undef NDEBUG  // Just in case.
    #define NDEBUG
    #include <assert.h>
    #endif
    void critical_function()
    {
        //  ...
    }
    #undef NDEBUG
    #include <assert.h>

Why do you think you're allowed to include <assert.h> multiple
times, with its meaning depending each time on the current
definition of NDEBUG?

ah! One of the best answers I've seen to "asserts must only be used
whilst debugging"!
That's your opinion.  It doesn't correspond to the design of the
feature, nor good programming practices.

quite. Almost sig snarfable. Note leaving lots of asserts in as the
only error handling strategy probably *isn't* acceptable for high
degrees of defensiveness. Anti-lock brakes?
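
For reference, the re-inclusion trick above is explicitly sanctioned
by the standard: assert is redefined according to the state of NDEBUG
each time <assert.h> is included. A minimal sketch:

    #include <assert.h>

    void ordinary_function(int* p)
    {
        assert(p != 0);     // active: NDEBUG not defined here
    }

    #define NDEBUG
    #include <assert.h>     // legal: assert is redefined as a no-op

    void critical_function(int* p)
    {
        assert(p != 0);     // compiled out in this region
    }

    #undef NDEBUG
    #include <assert.h>     // active again from here on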

<snip>
 
Nick Keighley

* Alf P. Steinbach:

Oh, discovery:

the reason that I've never used the alloca technique that I mention above seems
to be that alloca *is not* consistently defined on different platforms.

   <url:http://www.mkssoftware.com/docs/man3/alloca.3.asp>
   guarantees 0 on error,

   <url:http://msdn.microsoft.com/en-us/library/wb1s57t5(VS.71).aspx>
   guarantees a "stack overflow exception" on error, and

   <url:http://www.kernel.org/doc/man-pages/online/pages/man3/alloca.3.html>
   says the error behavior is undefined.

Perhaps the group can benefit from this info.


I understand it doesn't "play well" with C99's vararray. How well does
it interact with new/delete?
 
Alf P. Steinbach

* Nick Keighley:
I understand it doesn't "play well" with C99's vararray.

Didn't know that but it stands to reason; they compete for the same resource.

How well does it interact with new/delete?

I can't see any reason why the features should interact in any way. alloca
adjusts the stack pointer. new/delete allocate from some heap.

At least in Windows programming a major usage of alloca is to allocate strings
efficiently for conversion between char strings and wchar_t strings.

And there's no problem with that, when used correctly, but alloca is a very
fragile low-level mechanism.
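
A sketch of that usage (the header is platform specific, there is no
portable failure check, as noted above, and the buffer dies with the
stack frame, so it must never be stored, returned, or sized from
untrusted input):

    #include <alloca.h>    // Microsoft: <malloc.h> and _alloca
    #include <cstdlib>
    #include <cstring>

    void use_widened(const char* s)
    {
        std::size_t n = std::strlen(s) + 1;
        wchar_t* wide = static_cast<wchar_t*>(alloca(n * sizeof(wchar_t)));
        std::mbstowcs(wide, s, n);
        // ... pass 'wide' to some wchar_t API within this function ...
    }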


Cheers,

- Alf
 
Jorgen Grahn

Programming errors are best handled by preventing them in the
first place: carefully specifying each function, and reviewing
the code which uses the function against the specification.


Asserts are a sort of life jacket, to prevent things from
fucking up too much when you screwed up.

I see them as a self-destruct device (like the thermonuclear wristwatch
in the action movie /Predator/) but it amounts to the same thing ...

(I see there is a long debate further down in the thread. I'll try
to ignore it; I stated my views in a similar thread a year ago or so.)

/Jorgen
 
James Kanze

You are talking garbage, sorry: saying "print" is an exception
or callbacks (abstract interfaces) are an exception is just
wrong.

Print may be an exception, but I'm not convinced. I think
various call inversion sequences are a definite exception. But
a lot of the experts disagree with me, and do insist that
virtual functions never be public. (Look at the std::streambuf
hierarchy, for example.)
These are only "exceptions" to some stupid rule that
only a subset of the C++ community happen to agree with,
including yourself.

The "subset" includes most of the experts. People like Herb
Sutter, for example.
As I disagree with this rule they are not exceptions but
simply normal C++ coding practice.

Normal for inexperienced programmers, perhaps, or those who
don't care about quality.
 
James Kanze

A hard limit and graceful abort and/or user notification in
anticipation of what would be a stack fault is more desirable
than an actual stack fault.

I agree but...
It is not that difficult to determine a hard limit; it will be
proportional to the size of your stack.

To determine the limit, you have to know how much stack you are
using, and how much stack is available. In general, you know
neither.
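
On a POSIX system you can at least query the configured ceiling,
though that still tells you nothing about how much of it you are
currently using (a sketch; the soft limit may also be RLIM_INFINITY,
i.e. unlimited):

    #include <sys/resource.h>
    #include <cstdio>

    int main()
    {
        rlimit rl;
        if (getrlimit(RLIMIT_STACK, &rl) == 0
            && rl.rlim_cur != RLIM_INFINITY) {
            std::printf("stack ceiling: %lu bytes\n",
                        static_cast<unsigned long>(rl.rlim_cur));
        }
        return 0;
    }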
 
James Kanze

Slightly off-topic but C# (Windows Forms) uses public virtual
functions extensively it seems so again are people at
Microsoft not experts nor care about quality? C# is supposed
to be the latest and greatest.

C# is not C++, so I wouldn't expect the same idioms to apply.
And I'm not sure I'd consider Microsoft as an example when it
comes to quality.
 
Brian

Sorry but I disagree with your opinion.

For the most part, I'm not stating opinion. Look closely at the
design of assert, and the guarantees it gives you.
Different software has different requirements regarding how
defensive you should be. A typical application should not be
using assert to terminate in a released product.

A typical application in what domain? It's clear that any
critical software must terminate as soon as it is in doubt. And
I've pointed out why this is true for an editor (and the same
logic also holds for things like spreadsheets). I also
recognize that there are domains where it isn't true. I'm not
sure, however, what you consider "typical".
A released product should be using exceptions for errors which
are valid during runtime. Assert is used for catching
programming errors, not valid runtime errors or bad user
input.

There's no disagreement on that.


Programmers should get into the habit of adequately testing
their software prior to release (assert helps with this) and
users should get into the habit of regularly backing up their
important data.

It would help if you'd read what was written, before disagreeing
with it. No one is arguing against testing. And it's not the
user who's backing up his data, it's the editor---all of the
editors I know today regularly checkpoint their data in case
they crash. The whole point is that if there is a programming
error, and the editor continues, it's liable to overwrite the
checkpoint with corrupt data (or the user, not realizing that
the data is corrupt, is liable to overwrite his own data).
Using assert liberally is fine (I have no problem with this)
but this is, in most cases, an aid during development only,
rather than creating hundreds of crash points in a released
product.

If you've tested correctly, leaving the asserts active creates
zero crash points in the released product. And if you've missed
a case, crashing is the best thing you can do, rather than
continuing, and possibly destroying more data.

[...]
I am sorry but you are wrong: you should either be extremely
defensive or not defensive at all; somewhere in-between is
pointless.

When someone starts issuing statements as ridiculous as that, I
give up. It's not humanly possible to be 100% defensive.
Extremely defensive means at least one assert at some point
after calling a function which has side effects (which could be
a precondition check before a subsequent function call). This
is overkill for typical desktop applications, for example.
It is a nonsense to say that virtual functions shouldn't be
public: a public virtual destructor is fine if you want to
delete through a base class pointer.

The destructor is an obvious exception. But most experts today
generally agree that virtual functions should usually be either
protected or private.

Again, there are exceptions, and I have classes with only public
virtual functions. (Callbacks are a frequent example.) But
they're just that: exceptions.
Bjarne Stroustrup's first virtual function example in TC++PL
is a public Employee::print() method, I see no problem with
this.

For teaching, neither do I. (For that matter, a print function
might be an exception. It's hard to imagine any reasonable pre-
or post-conditions.)
You are probably thinking of virtual functions which are
called as part of some algorithm implemented in a base class;
such virtual functions need not be public, as it makes no sense
for them to be, but it does not follow that this is the case
for all virtual functions.

No, I'm not thinking of the template method pattern. I'm
thinking of programming by contract.

After considering this I find in my own work reasons for
making virtual functions private/protected. At first
I was a little skeptical as it does involve more work/
infrastructure, but I found a couple of reasons to adopt
this approach: primarily it allows users a customization
point that otherwise wouldn't be there. Previously the
framework directly called a generated Send function.
Now I'm replacing a sometimes virtual and always
computer generated Send function with a never virtual
and not computer generated, by me at least, Send
function. Then I'm adding two virtual functions --
a private SendTypeNum, and a protected SendMemberData.
http://webEbenezer.net/rbtree/rbtree_marshalling.hh
has examples of this.

A sample implementation of a user-written Send
function looks like:

void
Send(SendBuffer* buf, bool sendType) const
{
    if (sendType) {
        SendTypeNum(buf);
    }
    SendMemberData(buf);
}

The value for sendType is determined by
checking the context. If I find a pointer
use, vector<Shape*>, I'll set it to true.
I'll also set it to true for boost::intrusive
uses. rbtree<Base>, for example, can hold either
Base or derived types. Other than that I think
it's set to false. So if you want to marshall a
vector<Triangle>, type numbers won't be included
in the stream. (It's possible that the Triangle
instances are intended to be received in an
rbtree<Shape> container and you actually want the
type numbers. In that case though, you would have
to change your marshalling interface. Another
possibility would be to let users override the
default process for determining the sendType
value. Something like (vector<Triangle> @type_nums).)

Another benefit I notice is that the approach, while
a little more difficult to put in place, makes it
more difficult for someone to mess things up once
it is in place.


Brian Wood
Ebenezer Enterprises
http://webEbenezer.net
(651) 251-9384
 
Jorgen Grahn

On Feb 16, 11:49 am, "Leigh Johnston" <[email protected]> wrote:

[some attribution lines missing it seems]
Print may be an exception, but I'm not convinced. I think
various call inversion sequences are a definite exception. But
a lot of the experts disagree with me, and do insist that
virtual functions never be public. (Look at the std::streambuf
hierarchy, for example.)


The "subset" includes most of the experts. People like Herb
Sutter, for example.

I don't want to get involved in an old argument, but James, where can
I read more about the reasoning behind this? It was a new opinion to
me; I don't think I've come across it before.

I have some kind of dislike for run-time polymorphism in general.
Maybe this can help me understand *why* I dislike it.

/Jorgen
 
Robert Fendt

I don't want to get involved in an old argument, but James, where can
I read more about the reasoning behind this? It was a new opinion to
me; I don't think I've come across it before.

I have some kind of dislike for run-time polymorphism in general.
Maybe this can help me understand *why* I dislike it.

Not wanting to get involved too deeply here, as well, but: the
main scenario where public virtuals are _really_ problematic is
IIRC in the presence of overloading. Consider the following:

#include <iostream>

class base
{
public:
    virtual void foo(double)
    {
        std::cout << "base::foo(double)" << std::endl;
    }
    virtual void foo(int)
    {
        std::cout << "base::foo(int)" << std::endl;
    }
};

class deriv : public base
{
public:
    void foo(int)
    {
        std::cout << "deriv::foo(int)" << std::endl;
    }
};

int main()
{
    base b;
    deriv d;

    b.foo(1);
    b.foo(1.0);
    d.foo(1);
    d.foo(1.0);

    return 0;
}

Maybe somewhat surprisingly (at least to some people) in the
fourth call deriv::foo(int) is called instead of
base::foo(double). This is because the definition in the derived
class hides the one in the base class (and name resolution
happens before any virtual function resolution). So the compiler
just never 'sees' a definition 'foo(double)' in the derived class
(you have to import the base class's definitions explicitly via
a 'using' declaration).
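
Applied to the example above, the fix looks like this:

    class deriv : public base
    {
    public:
        using base::foo;   // re-export base::foo(double) and base::foo(int)
        void foo(int)
        {
            std::cout << "deriv::foo(int)" << std::endl;
        }
    };

With the using-declaration in place, d.foo(1.0) resolves to
base::foo(double) as most people would expect.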

Because of this (at least in the presence of overloading), it is
often considered good practice to make the virtual functions
private and access them through non-virtuals defined in the base
class (i.e., the 'non-virtual interface idiom'). Another reason
is that there is no 'override' attribute in current standard
C++, meaning that you can hide a virtual function signature
entirely by mistake (and the compiler will usually not even
issue a diagnostic). IIRC there will be support for 'override'
in C++0x, and VC++ 8.0+ already provides a non-standard
extension. But at the moment it remains a problem if you want to
write portable code.
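
With the C++0x syntax, the mistake becomes a compile-time error
(not yet portable, as he says):

    class deriv : public base
    {
    public:
        void foo(int) override;   // OK: overrides base::foo(int)
        void foo(long) override;  // error: overrides nothing in base
    };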

Regards,
Robert
 
Keith H Duggar

I am neither inexperienced nor indifferent to quality. You will find
public virtual functions in the following, for example:

Boost
Qt (cross platform GUI framework library)
MFC (Windows C++ GUI framework library)
Symbian OS C++ library

I am sure there are many other libraries (both good and bad) which
employ public virtual functions.

Are you saying neither experts nor people who care about quality are
involved in designing the above libraries?

One of the interesting and quite enjoyable aspects of the C++
community is that it continues to evolve and refine itself. As
the community gains experience (over many years) with certain
language features, the consensus opinion of "best" practices
changes. This (avoiding public virtuals) is one such example.

All of the libraries you mention above predate this newest
thinking and nobody in their right mind goes back and reworks
working and heavily tested code just for the purpose of making
it conform to the latest thinking. However, if you asked the
authors of the various Boost libraries if, knowing what they
know now, they would code things differently, many would say
(and in some cases have already said) yes.
You will find public virtual functions in the following books:

The C++ Programming Language, Bjarne Stroustrup.
Design Patterns: Elements of Reusable Object-Oriented Software, GoF

I am sure there are many other books (both good and bad) which
employ public virtual functions.
Again, are you dissing these people, saying they are not experts
nor care about quality?

Of course James isn't "dissing" them. He is talking about the
latest thinking. The community learns as time moves forward.
Do you think nothing has changed since the writing of those
books?? Of course it has! The C++ community is not a stagnant
pool of decaying ideas like some other language communities.
It is a vibrant and evolving one.
Herb Sutter is an expert yes but he also has opinions that
may differ from the opinions of others.

Maybe. But more likely you are just a little behind the times
or a little too energetically naive.
Most of my virtual functions are protected/private, only a small number of
them are ever public and this is a side effect of me caring about quality.
Saying virtual functions should never be public is a bad *opinion* in my
view.

So far all you have offered is your "opinion". You haven't given
even the smallest logical reasoning for why you "feel" this way.
You haven't offered any rebuttal to the many arguments against
public virtual functions. You know, this isn't a new topic. What
do you have to contribute that is *new* apart from your "opinion"?
Once you accept that abstract interfaces (which contain public pure virtual
functions) are fine all bets are off and this rule of yours (and Herb's)
falls over.

Ignoring the fallacy of begging the question, it is totally
ignorant to claim that "abstract interface" == "*public* pure
virtual functions". Are you sure you want to make that claim?
Here is an abstract interface

class Foo
{
public:
    void func() { implFunc(); }
protected:
    virtual void implFunc() = 0;
};

Notice there is no *public* pure virtual.

You know, really, do you think it is that simple? Do you think
that all these experts who have actually *thought* (as opposed
to just *felt*) about this topic forgot abstract bases? Do you
think all it takes is for Leigh to wave "abstract interfaces"
in their face and then all their reasoning just "falls over"?
Really man, get ahold of yourself. Don't be so vociferously
ignorant. At least learn about a topic before publicly
flailing and crying about it.

KHD
 
Alf P. Steinbach

* Leigh Johnston:
What utter garbage, get a clue. There is nothing wrong with public
virtual functions. There are times when it is necessary to change a
public virtual to a private/protected one but that does not rule out
the use of public virtual functions.

It's an engineering issue and more like a context-dependent preference than an
absolute rule. C++ is geared towards large code sizes. Since a public virtual
routine can be called directly by client code, and can be overridden by many
subclasses, there's no convenient way to intercept calls to it. A non-virtual
routine provides an interception point, a hook, where pre- and post- conditions
can be checked, say. In a small program you may know about all derived classes
and their usage and it's no problem, that's true.

Essentially those who advocate "no public virtuals" have found that such
checking and whatever else one might put into the non-virtual public routine
saves work -- for others -- for their kind of large scale development.

If C++ had had support for Design By Contract (where you can outfit a virtual
with pre- and post- condition checking, and have that applied automatically for
an override) then one main argument against public virtuals would perhaps be
moot. But the DBC proposal foundered on technicalities; it turned out to be
extremely difficult to define things exactly for C++, in particular that during
a call of a routine the class invariant may be temporarily invalid, and how does
one differentiate between internal calls and client-code calls? As with modules
and much else, in C++ one has to emulate the functionality of some hypothetical
higher level language by applying conventions systematically, and by avoiding
the troublesome cases that language support would have had to handle.
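
A sketch of that systematic convention (all names invented for
illustration):

    #include <cassert>

    class Account
    {
    public:
        void withdraw(int amount)     // non-virtual interception point
        {
            assert(amount > 0);       // precondition
            assert(invariant());      // class invariant on entry
            do_withdraw(amount);
            assert(invariant());      // class invariant on exit
        }
    protected:
        int balance_;
    private:
        virtual void do_withdraw(int amount) = 0;
        bool invariant() const { return balance_ >= 0; }
    };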


Cheers & hth.,

- Alf
 
Alf P. Steinbach

* Leigh Johnston:
An abstract interface (or "callback class") is just that: an interface;
it never contains any code

Your statement would be correct for Java. In C++ this is not so. And indeed a
main issue that C++ folks have with Java is that restriction of Java interfaces.

so adding a non-virtual wrapper is both
retarded and pointless. A mixin class is different and sometimes it
might be necessary to change a public virtual to a non-virtual and live
with a temporary BC break.

Accepting a "temporary BC break" (whatever BC is meant to stand for, Binary
Compatibility?) is a small scale development view, if anything. In general the
work involved in fixing a problem increases exponentially with how late in the
development/maintenance process the problem is addressed. So it pays to detect
and fix problems as early as possible, and that's what e.g. pre- and post-
condition checking does: it detects problems as early as possible, to save work.

There are also other (context dependent) arguments against public virtuals, in
particular a clean separation of concerns, separating the public interface from
the details of implementation so that that implementation can be freely changed,
which again is mostly an issue with larger code sizes.

Note that the paragraph above cannot be understood with the restricted Java
meaning of "interface" where an interface "never contains any code".


Cheers & hth.,

- Alf
 
James Kanze

An abstract interface (or "callback class") is just that: an
interface; it never contains any code, so adding a non-virtual
wrapper is both retarded and pointless.

Not all abstract interfaces are callbacks. It would help in the
discussion if you didn't continuously mix different concepts. An
abstract interface defines a common interface to a set of
derived classes; in order for users to be able to program
against that interface, it must define a contract, in terms of
pre- and post-conditions. Traditionally, this has been done in
documentation; implementing it in the form of asserts in the
code is more convenient and more effective. To do so, however,
requires that the virtual functions be private or protected.
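
Concretely, something like this (the names are purely illustrative):

    #include <cassert>

    class Reader
    {
    public:
        int read(char* buf, int n)                // the contract lives here
        {
            assert(buf != 0 && n > 0);            // precondition, checked once
            int result = doRead(buf, n);
            assert(0 <= result && result <= n);   // postcondition, enforced
            return result;                        // on every override
        }
    private:
        virtual int doRead(char* buf, int n) = 0;
    };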
 
James Kanze

[...]
I don't want to get involved in an old argument, but James,
where can I read more about the reasoning behind this? It was
a new opinion to me; I don't think I've come across it before.

The article which brought the idea into mainstream programming
is http://www.gotw.ca/publications/mill18.htm, by Herb Sutter.
(I'm not sure I agree with all of this article---in particular,
he refers to this as the template method pattern. While it does
have some superficial similarity to the template method pattern,
the goals and the rationale behind it are considerably
different.) The idea wasn't particularly new even then: I think
Nathan Myers was one of the early proponents (and I suspect that
his influence in the library group of the C++ standards
committee is why std::streambuf has no public virtual
functions), although I don't know what his exact motivations
were. I'd also heard of it as a means of implementing PbC from
somewhere else, although I believe I was one of the early
proponents of this point of view. At any rate, Herb Sutter's
article is a good introduction, even if he misuses a word or
two, and sometimes overstates the case.
I have some kind of dislike for run-time polymorphism in
general. Maybe this can help me understand *why* I dislike
it.

There's nothing wrong with run-time polymorphism, when it solves
the problem at hand. Like everything else, however, it's not a
silver bullet. It adds complexity, and if that complexity
doesn't buy you something more valuable in return, you shouldn't
introduce it.
 
James Kanze

"James Kanze" <[email protected]> wrote in message

[...]
You run test cases to determine a ballpark figure. Stack
size is deterministic, otherwise it would be pointless to be
able to specify a stack size for a newly created thread, for
example.

Stack size is not usually deterministic, at least not in a
single threaded program. In practice, stack size depends on the
current setting of ulimits, how much virtual memory is
available, and how much of that is being used by other
processes.

(Which brings up a second point: on many configurations I've
seen, the system will start thrashing and become unusable long
before you get a bad_alloc. And of course, more than a few OS's
will lie to you: operator new will return a valid pointer, but
when you try to use it, your program crashes. Or some other
program crashes.)
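
The per-thread case is the one place where a size really is requested
up front, which is presumably what the quoted poster has in mind; a
POSIX sketch:

    #include <pthread.h>

    void* worker(void*) { return 0; }

    int main()
    {
        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setstacksize(&attr, 1024 * 1024);  // request 1 MiB
        pthread_t tid;
        pthread_create(&tid, &attr, worker, 0);
        pthread_join(tid, 0);
        pthread_attr_destroy(&attr);
        return 0;
    }

Even then the request is only a reservation; whether the pages are
actually there when touched is still up to the OS, which is the point
about overcommitting systems above.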
 
Brian

    [...]
I don't want to get involved in an old argument, but James,
where can I read more about the reasoning behind this?  It was
a new opinion to me; I don't think I've come across it before.

The article which brought the idea into mainstream programming
is http://www.gotw.ca/publications/mill18.htm, by Herb Sutter.


That article says, "Don't derive from concrete classes."
I don't know of a better way to support multiple versions
of a client. The versioning that some well-known
serialization libraries tout encourages people to use one
class and conditionals to support multiple versions within
the class. I think that approach is very weak and that
deriving from a concrete class is the better alternative.


Brian Wood
http://webEbenezer.net
(651) 251-9384
 
Michael Doubez

In a single threaded program the maximum stack size is usually a linker
setting.

Under Linux at least, the maximum stack size can be set. AFAIK it is
set by the administrator and can be redefined per shell. The linker
may require a specific amount of stack but I don't see how it could
control the maximum size.
 Stack size *is* *usually* deterministic.

I don't know about that but it makes sense to allow the creation of a
process with a stack size smaller than the default in case of
starvation (the OS can make the bet you won't use it).
 
