difference between pointers


Keith Thompson

Tim Rentsch said:
Keith Thompson said:
Tim Rentsch said:
You're assuming that this:

d1 = &a; /* where d1 and a are both of type int */

specifies a conversion. Nothing in the C standard says or implies
that. The types int and int* are not assignment-compatible, so
the assignment is a constraint violation, requiring a diagnostic.
*If* the compiler chooses to generate an executable after issuing
the diagnostic, nothing in the C standard says anything about how
it behaves. [...] a conforming implementation could do *anything*.
[snip elaboration]

I'm not sure what your reasoning is to reach this conclusion.
Certainly this assignment has a constraint violation, but you're
saying, in effect, that it has undefined behavior. Presumably
the underlying reasoning is one of two things, namely:

A. There is a constraint violation, and any constraint
violation is necessarily undefined behavior; or

B. The assignment statement is trying to assign a pointer
type to an integer type, and nothing in the Standard
says how to do that, so there is undefined behavior.

IMO point A is incorrect, although I would agree the point
is debatable. Section 4 paragraph 3 says in part:

If a ``shall'' or ``shall not'' requirement that appears
outside of a constraint or runtime-constraint is violated,
the behavior is undefined.

IMHO A is correct (programs with constraint violations have
undefined behavior), though I'm not sure I can prove it.

This statement makes it reasonable to infer that a constraint
violation might _not_ result in undefined behavior in some
instances, as otherwise there is no point in excluding it.
("The exception proves the rule in cases not excepted.")

Perhaps, but only if the behavior is actually defined somewhere.

The behavior is defined by the semantics paragraphs of "Simple
assignment", which the expression in question must have been
identified as being. (This point expanded on below.)

Well, yes, but more on that below.

The problem with this reasoning is that the compiler must have
identified the expression as a simple assignment, because the
constraint only applies to simple assignments, and violating a
constraint requires a diagnostic. If we don't know that the
expression is a simple assignment, then there is no constraint
violation, and the compiler would be free to treat the program as
having undefined behavior, without issuing a diagnostic. This is
a classic "you can't have it both ways" kind of situation. The
only reasonable way out is to say the compiler must identify the
expression in question as a simple assignment, and proceed
accordingly.

I went a little overboard saying that it isn't a simple assignment.
Syntactically, it clearly is.

But once the compiler recognizes it as a simple assignment, how
far must it "proceed accordingly"?

Semantically, it violates a constraint. A constraint is by
definition a "restriction, either syntactic or semantic, by which
the exposition of language elements is to be interpreted". I admit
that's a bit vague, but what I get from that is that violating
a constraint invalidates the exposition. My reading of 6.5.16.1
is roughly "*If* the following constraints are satisfied *then* a
simple assignment has the following semantics." That's not the only
possible reading, of course, but it's the only one I can think of
that causes the definition of "constraint" to make sense.

And it turns out I've asked about this in comp.std.c several times
over the years (apparently I've been posting here long enough that
I sometimes forget things I've discussed before):

https://groups.google.com/forum/?fromgroups=#!topic/comp.std.c/WNVXRSCqrGU
https://groups.google.com/forum/?fromgroups=#!topic/comp.std.c/3Nu8-vlJOEU
https://groups.google.com/forum/?fromgroups=#!topic/comp.std.c/M2UxT1wk1xQ

The threads are interesting reading if you're into that kind of thing.
Several people strongly stated their opinion that programs that violate
constraints have undefined behavior, but I wasn't convinced that any of
them proved it.

One relevant post from 2007 (note that Doug Gwyn is a member of the
Committee):

https://groups.google.com/group/comp.std.c/msg/4661905eda66827?dmode=source&output=gplain&noredirect

Douglas A. Gwyn said:
Issuance of a "diagnostic" (meeting the implementation definition for
identification, required for conformance to the C standard) implies
rejection of the program (again, insofar as conformance is concerned).
If an implementation wants to proceed to do something further with
the translation unit, typically to continue processing to potentially
generate additional diagnostics but also to go ahead and produce
object code, then that is its business and it is allowed to do so.

I like that interpretation, and I wish it were clearly stated in the
standard.

I think the bottom line is that the standard is unclear about
the semantics, if any, of programs that violate constraints, and
particularly about the definition of "constraint". A clear statement
in the standard (even in a note) that any program that violates a
constraint, if it's accepted, has undefined behavior would settle
the issue. A clear statement of the opposite would do so as well.

*If* my interpretation is correct, then the cases where a description
of the semantics applies even when a constraint is violated (as in
the definition of simple assignment) can be explained as avoiding
redundancy. Yes the semantics section under "Simple assignment"
says that the RHS is converted to the type of the LHS; it doesn't
say *again* that the type must meet the constraints because that's
already been stated.

It seems odd to me to permit a compiler to reject a given construct,
but to impose specific requirements on its behavior if it's accepted.
A programmer cannot reasonably depend on such a guarantee. On the
other hand, there are plenty of things in the standard that I find
odd, so that doesn't prove anything.

[SNIP]
To try to bring the conversation up a level: the essential point I
was trying to make is that the question is not black and white.
Reasonable people can disagree here.

I believe we've demonstrated that by being reasonable people
disagreeing about it. :cool:

In the interest of giving a
fair presentation under such circumstances, I think it's better to
give a qualified statement rather than treating the matter as
completely settled.

Hmm. Feel free to assume that anything posted under my name is prefixed
with "In my opinion, ".

[snip]
 

Nick Bowler

army1987 said:
There is no guarantee that a pointer to any type other than void can be
converted to any integer type other than _Bool. A pointer to void can
be converted to the types intptr_t and uintptr_t, but only if the implementation
defines them, which the Standard does not require.


This is not quite correct. Any pointer type can be converted to any
integer type. In most cases, however, the result is implementation-
defined.
Is there any good reason why (intptr_t)&i isn't required to be the same
as (intptr_t)(void *)&i? (Crossposted to comp.std.c.)

The literal text of the standard only requires the conversion to
(u)intptr_t and back again to work for "any valid pointer to void".
As there is a similar requirement for conversions of other kinds of
object pointers to void * and back again, I can't think of a good
reason for why there is no requirement for conversion of other sorts of
object pointers to (u)intptr_t and back again to work.

If I encountered an implementation which provides (u)intptr_t but
conversions of non-void object pointers and back again did not work, I
would assume that this was either the result of malice on the part of the
implementers, or maybe they just ported some code from the DeathStation
9000.

Fortunately, in this situation the craziness would have to be documented
as the result is squarely in implementation-defined territory.
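
For what it's worth, here is a minimal sketch of the two round trips being
discussed (assuming an implementation that provides intptr_t at all; the
variable names are just for illustration):

#include <stdio.h>
#include <stdint.h>   /* intptr_t/uintptr_t are optional; assume they exist here */

int main(void)
{
    int i = 42;

    /* The stdint.h wording guarantees this round trip: any valid
       pointer to void converted to intptr_t and back compares equal. */
    void *vp = &i;
    intptr_t n1 = (intptr_t)vp;
    void *vp2 = (void *)n1;

    /* Converting an int * directly is only implementation-defined,
       and the literal text gives no round-trip guarantee for it. */
    intptr_t n2 = (intptr_t)&i;

    printf("void * round trip ok: %d\n", vp2 == (void *)&i);
    printf("same integer value:   %d\n", n1 == n2);  /* not required by the letter of the Standard */
    return 0;
}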
 

Tim Rentsch

army1987 said:
There is no guarantee that a pointer to any type other than void can be
converted to any integer type other than _Bool. A pointer to void can
be converted to the types intptr_t and uintptr_t, but only if the implementation
defines them, which the Standard does not require.


Is there any good reason why (intptr_t)&i isn't required to be the same
as (intptr_t)(void *)&i? (Crossposted to comp.std.c.)


The pointer types (int *) and (void *) don't necessarily have
the same representation, or even the same size. The most
natural way of effecting a pointer-to-integer conversion is
just to copy the bits of the pointer into the bits of the
integer object representation. Obviously if we start with
a different representation, or even worse a different size,
that could affect the results.
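
To make that concrete, a small illustrative program (on most hosted
implementations today int * and void * share a size and representation, so
the two conversions will usually agree; the point is that nothing in the
quoted wording forces them to):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int i = 0;

    printf("sizeof(int *)  = %zu\n", sizeof(int *));
    printf("sizeof(void *) = %zu\n", sizeof(void *));

    /* On an implementation where the representations differ, these two
       values could legitimately differ as well. */
    printf("(intptr_t)&i         = %jd\n", (intmax_t)(intptr_t)&i);
    printf("(intptr_t)(void *)&i = %jd\n", (intmax_t)(intptr_t)(void *)&i);
    return 0;
}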
 

Tim Rentsch

Keith Thompson said:
Tim Rentsch said:
Keith Thompson said:
[...]
You're assuming that this:

d1 = &a; /* where d1 and a are both of type int */

specifies a conversion. Nothing in the C standard says or implies
that. The types int and int* are not assignment-compatible, so
the assignment is a constraint violation, requiring a diagnostic.
*If* the compiler chooses to generate an executable after issuing
the diagnostic, nothing in the C standard says anything about how
it behaves. [...] a conforming implementation could do *anything*.
[snip elaboration]

I'm not sure what your reasoning is to reach this conclusion.
Certainly this assignment has a constraint violation, but you're
saying, in effect, that it has undefined behavior. Presumably
the underlying reasoning is one of two things, namely:

A. There is a constraint violation, and any constraint
violation is necessarily undefined behavior; or

B. The assignment statement is trying to assign a pointer
type to an integer type, and nothing in the Standard
says how to do that, so there is undefined behavior.

IMO point A is incorrect, although I would agree the point
is debatable. Section 4 paragraph 3 says in part:

If a ``shall'' or ``shall not'' requirement that appears
outside of a constraint or runtime-constraint is violated,
the behavior is undefined.

IMHO A is correct (programs with constraint violations have
undefined behavior), though I'm not sure I can prove it.

This statement makes it reasonable to infer that a constraint
violation might _not_ result in undefined behavior in some
instances, as otherwise there is no point in excluding it.
("The exception proves the rule in cases not excepted.")

Perhaps, but only if the behavior is actually defined somewhere.

The behavior is defined by the semantics paragraphs of "Simple
assignment", which the expression in question must have been
identified as being. (This point expanded on below.)

Well, yes, but more on that below.

The problem with this reasoning is that the compiler must have
identified the expression as a simple assignment, because the
constraint only applies to simple assignments, and violating a
constraint requires a diagnostic. If we don't know that the
expression is a simple assignment, then there is no constraint
violation, and the compiler would be free to treat the program as
having undefined behavior, without issuing a diagnostic. This is
a classic "you can't have it both ways" kind of situation. The
only reasonable way out is to say the compiler must identify the
expression in question as a simple assignment, and proceed
accordingly.

I went a little overboard saying that it isn't a simple assignment.
Syntactically, it clearly is.

But once the compiler recognizes it as a simple assignment, how
far must it "proceed accordingly"?

Semantically, it violates a constraint. A constraint is by
definition a "restriction, either syntactic or semantic, by which
the exposition of language elements is to be interpreted". I admit
that's a bit vague, but what I get from that is that violating
a constraint invalidates the exposition. My reading of 6.5.16.1
is roughly "*If* the following constraints are satisfied *then* a
simple assignment has the following semantics." That's not the only
possible reading, of course, but it's the only one I can think of
that causes the definition of "constraint" to make sense.

My reading is that it places limits on what must be accepted by
the compiler but doesn't otherwise change the applicable semantic
descriptions. This shouldn't be a strange idea, as after all it
is what people expect for environmental limits -- the compiler
may reject any program that exceeds an environmental limit, but
if the program is accepted then the compiler better follow the
given semantic descriptions. I think the difference is primarily
one of expectation -- we expect a constraint violation will
result in a program being rejected, whereas we hardly ever expect
a program will be rejected because an environmental limit was
exceeded (and certainly the hope is that the program will not be
rejected!).

And it turns out I've asked about this in comp.std.c several times
over the years (apparently I've been posting here long enough that
I sometimes forget things I've discussed before):

https://groups.google.com/forum/?fromgroups=#!topic/comp.std.c/WNVXRSCqrGU
https://groups.google.com/forum/?fromgroups=#!topic/comp.std.c/3Nu8-vlJOEU
https://groups.google.com/forum/?fromgroups=#!topic/comp.std.c/M2UxT1wk1xQ

The threads are interesting reading if you're into that kind of thing.
Several people strongly stated their opinion that programs that violate
constraints have undefined behavior, but I wasn't convinced that any of
them proved it.

I expect there is some interesting reading there. I would be
more enthusiastic if the "improved" Google groups interface
weren't so abysmally bad. :(

One relevant post from 2007 (note that Doug Gwyn is a member of the
Committee):

https://groups.google.com/group/comp.std.c/msg/4661905eda66827?dmode=source&output=gplain&noredirect



I like that interpretation, and I wish it were clearly stated in the
standard.

Note that the long paragraph doesn't say one way or the other as
to whether the proceeding compilation is obliged to honor other
semantic descriptions that are well-defined. The author may have
a position on the subject, but I don't think this paragraph
clearly expresses it.

I think the bottom line is that the standard is unclear about
the semantics, if any, of programs that violate constraints, and
particularly about the definition of "constraint". A clear statement
in the standard (even in a note) that any program that violates a
constraint, if it's accepted, has undefined behavior would settle
the issue. A clear statement of the opposite would do so as well.

I agree 100%.

*If* my interpretation is correct, then the cases where a description
of the semantics applies even when a constraint is violated (as in
the definition of simple assignment) can be explained as avoiding
redundancy. Yes the semantics section under "Simple assignment"
says that the RHS is converted to the type of the LHS; it doesn't
say *again* that the type must meet the constraints because that's
already been stated.

It seems odd to me to permit a compiler to reject a given construct,
but to impose specific requirements on its behavior if it's accepted.
A programmer cannot reasonably depend on such a guarantee. On the
other hand, there are plenty of things in the standard that I find
odd, so that doesn't prove anything.

It doesn't seem so odd to me because there are at least two other
cases where the Standard does just that, those being exceeding an
environment limit and use of a conditionally present feature (of
which C11 has too many IMO, but that is a separate discussion).

[SNIP]
To try to bring the conversation up a level: the essential point I
was trying to make is that the question is not black and white.
Reasonable people can disagree here.

I believe we've demonstrated that by being reasonable people
disagreeing about it. :cool:

Just so. :)

Hmm. Feel free to assume that anything posted under my name is prefixed
with "In my opinion, ".

There are two reasons why I hope you'll reconsider this response.

First, I don't think it's really an accurate description all the
time. Some of the things you say you consider (I believe) to be
simply indisputable, ie, that no reasonable (and informed) person
would disagree. For example, the printf() statement you mentioned
earlier as not being strictly conforming -- this is not just an
opinion, as it cannot be reasonably disputed. It is valuable to
distinguish these two kinds of situations.

Second, and probably more important, I'm not the only audience for
your comments. Lots of people read what you have to say, and it
helps them understand not just the language but also the culture
of people who are interested in the language definition, and how
firm or weak the consensus is in different areas. IMO you would
be doing them a disservice to voice statements of opinion as if
they are statements of fact, or to not distinguish between cases
where you are giving a statement of opinion and where you feel
there is overwhelming consensus on a particular issue. By all
means, in any case where you feel there is overwhelming consensus,
please go ahead and respond unequivocally. In other cases,
however, where there is not such a strong consensus, and you
think reasonable people can disagree, I hope you'll agree that
it is better to express such remarks in a qualified way, so
readers get a more rounded view of the discussional landscape.
 

Jorgen Grahn

Why -O2 rather than -O3?

I included the optimization option because at least historically you
could get more warnings about dead code/data that way.

I could have written -Os or -O3, but some sources say the higher
levels may produce worse code due to too aggressive inlining etc, and
that you should measure before applying them (if performance is
important on that level).

/Jorgen
 

Nick Keighley

hello all,

I have a question about this behavior of C.

consider the following code.

int a,b,d1,d2,diff;

d1= &a;
d2 = &b;
diff = &a-&b;
printf("\nDifference between address: %d", diff);
diff = d1-d2;
printf("\nDifference between address(stored in integers): %d", diff);

Ideally, both printf calls should produce the same output,
but as expected diff = d1-d2 gives 4,
while diff = &a-&b gives 1.

Why so, when C doesn't really support operator overloading?


what compiler successfully compiles this code?
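
For readers wondering about the 4-versus-1 result itself: pointer subtraction
is defined in units of the pointed-to type, while subtracting raw addresses
stored in integers gives a byte count on typical implementations (and note
that &a - &b on two unrelated objects, as in the original snippet, is
undefined behavior anyway). A sketch using only well-defined operations,
assuming a 4-byte int and addresses that fit in intptr_t:

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

int main(void)
{
    int arr[2];

    /* Pointer subtraction counts elements, not bytes. */
    ptrdiff_t elems = &arr[1] - &arr[0];              /* 1 */

    /* Subtracting the converted addresses counts whatever the
       implementation-defined conversion produces -- typically bytes. */
    intptr_t d1 = (intptr_t)(void *)&arr[1];
    intptr_t d2 = (intptr_t)(void *)&arr[0];

    printf("element difference: %td\n", elems);
    printf("address difference: %jd\n", (intmax_t)(d1 - d2));  /* typically 4 here */
    return 0;
}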
 

Keith Thompson

Nick Keighley said:
what compiler successfully compiles this code?

gcc does, once you add the obvious boilerplate.

It "successfully" compiles it in the sense that it generates an
executable that actually prints something. It does produce the required
diagnostics, but they're merely warnings. (The "-pedantic-errors"
option makes them fatal errors.)
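
For concreteness, the completed translation unit presumably looks something
like this; the two int-from-pointer assignments are the constraint violations
under discussion, and a typical gcc reports them with warnings along the
lines of "assignment makes integer from pointer without a cast" (exact
wording varies by version), which -pedantic-errors upgrades to errors:

#include <stdio.h>

int main(void)
{
    int a, b, d1, d2, diff;

    d1 = &a;                 /* constraint violation: assigning int * to int */
    d2 = &b;                 /* constraint violation: assigning int * to int */
    diff = &a - &b;          /* also undefined: a and b are not in the same array */
    printf("\nDifference between address: %d", diff);
    diff = d1 - d2;
    printf("\nDifference between address(stored in integers): %d", diff);
    return 0;
}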
 

Tim Rentsch

Jorgen Grahn said:
You may also want to try

gcc -std=c99 -pedantic-errors ...
gcc -std=c11 -pedantic-errors

I strongly recommend

-std=something -Wall -Wextra -pedantic -O2

[snip elaboration]

I used to be a fan of -Wall and -W (aka -Wextra). Now,
not so much. In no particular order, my complaints are:

* I believe good programming practice normally treats
warnings as errors (ie, -Werror in gcc). Using -Wall
or -Wextra sometimes works at cross-purposes to that.

* The set of warnings included in -Wall or -Wextra include
some that are purely stylistic and have no bearing on
code correctness.

* IMO some of the style warnings are not just neutral but
actually bad.

* Some -Wall/-Wextra warning conditions can be removed
selectively with other option settings, but some can't.

* In cases where a warning class indicates a potential
code problem, there often are too many false positives.
This has the effect of training people either to muck
their code about just to shut up the compiler, or to
ignore warnings; neither of these is a good thing.

* The documentation is wrong, at least in the sense of
being incomplete -- there are warnings that come out
under -Wall or -Wextra that don't correspond to any
of the described warning conditions (and hence there
is no way to turn them off, at least not that I could
find).

* Moving target - the set of warnings included under -Wall
or -Wextra in one version of gcc might change in the next
version of gcc. It is very disconcerting to find code
thought to be completely clean suddenly start generating
warnings when compiled in a new environment.

There are lots of individual warnings in gcc that are quite
valuable, eg, "variable might not be initialized before use."
But rather than using the -Wall/-Wextra shotgun to turn
everything on, it's better to turn the high quality warning
conditions on individually, and not use the others. And also
-Werror, or at least -pedantic-errors (and please let the GNU
people know when -pedantic-errors gives an error for something
that is only undefined behavior, and not a syntax/constraint
violation).
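
As a sketch of what that looks like in practice (the particular selection of
warnings is only illustrative, and foo.c stands in for whatever is being
compiled; all of these are standard gcc options):

gcc -std=c11 -pedantic-errors -Werror \
    -Wuninitialized -Wreturn-type -Wimplicit-function-declaration \
    -Wformat -Wstrict-prototypes foo.c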
 

Tim Rentsch

Jorgen Grahn said:
You may also want to try

gcc -std=c99 -pedantic-errors ...
gcc -std=c11 -pedantic-errors

I strongly recommend

-std=something -Wall -Wextra -pedantic -O2

[...]

Another complaint I forgot:

* Usually turning on optimization (eg, -O2) will allow
additional warning conditions to be checked, but it
also can have the effect of removing some warnings
that are generated without it.
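
A hedged illustration of the kind of interaction meant here (observed
behavior differs across gcc versions, so treat this only as a sketch):

/* gcc's flow-based uninitialized-variable warnings are tied to the
   optimizers, so a function like this may draw a "may be used
   uninitialized" warning only when compiled with -O1 or higher --
   and if a later optimization level proves the offending use dead
   or folds it away, the warning can disappear again. */
int pick(int flag)
{
    int x;              /* deliberately not set on every path */
    if (flag)
        x = 1;
    return x;           /* read uninitialized when flag == 0 */
}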
 

glen herrmannsfeldt

(snip of things I already agree with)
* In cases where a warning class indicates a potential
code problem, there often are too many false positives.
This has the effect of training people either to muck
their code about just to shut up the compiler, or to
ignore warnings; neither of these is a good thing.

Yes. People often consider the benefit of adding a warning,
but not its cost. Looking through false positives is a
definite cost.
* The documentation is wrong, at least in the sense of
being incomplete -- there are warnings that come out
under -Wall or -Wextra that don't correspond to any
of the described warning conditions (and hence there
is no way to turn them off, at least not that I could
find).
* Moving target - the set of warnings included under -Wall
or -Wextra in one version of gcc might change in the next
version of gcc. It is very disconcerting to find code
thought to be completely clean suddenly start generating
warnings when compiled in a new environment.

I suppose that could be fixed with a warning version option,
not to generate warnings added since a specific version of
the compiler. Not likely to be added, though.
There are lots of individual warnings in gcc that are quite
valuable, eg, "variable might not be initialized before use."

I somewhat like the Java idea on this. Any variable that the
compiler can't detect is initialized before use is an error.

Sometimes the compiler doesn't see something that you know,
but reasonably often it is right.
But rather than using the -Wall/-Wextra shotgun to turn
everything on, it's better to turn the high quality warning
conditions on individually, and not use the others. And also
-Werror, or at least -pedantic-errors (and please let the GNU
people know when -pedantic-errors gives an error for something
that is only undefined behavior, and not a syntax/constraint
violation).

I suppose those could be selected separately if one wanted them.

-- glen
 

Tim Rentsch

glen herrmannsfeldt said:
I somewhat like the Java idea on this. Any variable that the
compiler can't detect is initialized before use is an error.

Java pays a price for this choice, namely, the algorithm for
deciding whether a variable has been initialized therefore
must be included as part of the language definition. (And
whatever algorithm is chosen, it can't be right all the time,
because the problem is undecidable.) That choice may be a
good choice for Java, but IMO it would be a bad one for C.
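
To make the tradeoff concrete, here is the sort of C function a conservative,
Java-style "definitely assigned" analysis has to reject even though the read
is in fact safe; deciding whether such paths are feasible is, in general,
exactly the undecidable part (the function names are just for illustration):

extern int cond(void);

int example(void)
{
    int x;
    int c = cond();

    if (c)
        x = 1;
    /* x is assigned whenever this branch is taken, but a sound,
       conservative checker must assume the two tests could disagree
       and therefore reject (or warn about) the read of x. */
    if (c)
        return x;
    return 0;
}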
 

glen herrmannsfeldt

Java pays a price for this choice, namely, the algorithm for
deciding whether a variable has been initialized therefore
must be included as part of the language definition. (And
whatever algorithm is chosen, it can't be right all the time,
because the problem is undecidable.)

I have wondered about that. Seems that you might get away
with improving the algorithm, such that old programs would
still compile. New ones might not compile on old compilers,
but then they might not anyway if they use new features.

But as for the undecidable, the default is that it is
an error. That is, if the compiler can't prove that it is
defined before use, it is an error.

That is only for scalar variables. Arrays are always initialized
to zero when allocated.
That choice may be a good choice for Java, but IMO
it would be a bad one for C.

I agree.

(Besides, it is a little late now.)

-- glen
 

Jorgen Grahn

Jorgen Grahn said:
Yep, it did give a warning. Thanks for pointing out the correct gcc option
for correct ANSI parsing.

You may also want to try

gcc -std=c99 -pedantic-errors ...
gcc -std=c11 -pedantic-errors

I strongly recommend

-std=something -Wall -Wextra -pedantic -O2

[snip elaboration]

This part of the elaboration is important:
It's a good start for new code [...]

I didn't intend to suggest the flags above as the universal solution
to all problems.

/Jorgen
 

Tim Rentsch

glen herrmannsfeldt said:
Tim Rentsch said:
[snip]
There are lots of individual warnings in gcc that are quite
valuable, eg, "variable might not be initialized before use."

I somewhat like the Java idea on this. Any variable that the
compiler can't detect is initialized before use is an error.

Java pays a price for this choice, namely, the algorithm for
deciding whether a variable has been initialized therefore
must be included as part of the language definition. (And
whatever algorithm is chosen, it can't be right all the time,
because the problem is undecidable.)

I have wondered about that. Seems that you might get away
with improving the algorithm, such that old programs would
still compile. New ones might not compile on old compilers,
but then they might not anyway if they use new features.

A new language definition could choose a different algorithm, or
specify an initialization rule for all declared variables, or
allow uninitialized variables, or some combination of the above.
But compilers have to do what the language definition says, for
whatever language they are implementing.
But as for the undecidable, the default is that it is
an error.

What you mean is that the language definition for Java has
chosen a conservative algorithm that never mis-identifies an
uninitialized variable as having been initialized. And I
believe that's right.
That is, if the compiler can't prove that it is defined
before use, it is an error.

The compiler is obliged to do whatever the language definition
says, which completely defines the result, with no latitude
left to the compiler. Anything else is not consistent with
the "Write once, run anywhere" philosophy that Java espouses.
 

Tim Rentsch

Jorgen Grahn said:
Jorgen Grahn said:
On Tue, 2013-04-02, Tim Rentsch wrote:

Yep, it did give a warning. Thanks for pointing out the correct gcc option
for correct ANSI parsing.

You may also want to try

gcc -std=c99 -pedantic-errors
...
gcc -std=c11 -pedantic-errors

I strongly recommend

-std=something -Wall -Wextra -pedantic -O2

[snip elaboration]

This part of the elaboration is important:
It's a good start for new code [...]

Yes, my snipping here was a bit overzealous. Sorry about that.

However, that qualifier doesn't lessen my reaction -- if anything
it intensifies it. Using -Wall and -Wextra on new code is the
worst place to use them, because that's where they are most
likely to nudge people into bad habits, or cause problems later.
It is much better to use -Wall/-Wextra sparingly, on source code
that is more mature, as an independent sanity check or quality
assessment step. Using -Wall or -Wextra on a regular basis,
especially as a default for new code, is IMO a bad practice and
one likely to lead to poor coding habits.

I didn't intend to suggest the flags above as the universal
solution to all problems.

Certainly that was not my impression, and I hope my comments
didn't suggest otherwise.
 

Jorgen Grahn

Jorgen Grahn said:
On Tue, 2013-04-02, Tim Rentsch wrote:

Yep, it did give a warning. Thanks for pointing out the correct gcc option
for correct ANSI parsing.

You may also want to try

gcc -std=c99 -pedantic-errors
...
gcc -std=c11 -pedantic-errors

I strongly recommend

-std=something -Wall -Wextra -pedantic -O2

[snip elaboration]

This part of the elaboration is important:
It's a good start for new code [...]

Yes, my snipping here was a bit overzealous. Sorry about that.

However, that qualifier doesn't lessen my reaction -- if anything
it intensifies it. Using -Wall and -Wextra on new code is the
worst place to use them, because that's where they are most
likely to nudge people into bad habits, or cause problems later.
It is much better to use -Wall/-Wextra sparingly, on source code
that is more mature, as an independent sanity check or quality
assessment step. Using -Wall or -Wextra on a regular basis,
especially as a default for new code, is IMO a bad practice and
one likely to lead to poor coding habits.

I didn't intend to suggest the flags above as the universal
solution to all problems.

Certainly that was not my impression, and I hope my comments
didn't suggest otherwise.

Ok, good. But we're still on opposite sides: IMO -Wall, -Wextra and
-pedantic lead to *good* habits in the usual case.

(There's one class of warnings I would agree are problematic, and
that's the ones for unused static functions, parameters and variables.
Useful, but when they don't alert you to an actual bug, it's often
hard to do anything about them without making the code worse. And yes,
I refuse to do that just to please the compiler.)

Others will have to make up their own minds, I guess.

/Jorgen
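
For what it's worth, the usual workarounds for the unused-parameter case look
something like the following; whether they improve the code or merely please
the compiler is exactly the judgement call at issue (the function names are
only illustrative):

/* Two common ways of silencing an "unused parameter" warning. */
int callback_a(int used, int unused_arg)
{
    (void)unused_arg;    /* portable: explicitly discard the value */
    return used * 2;
}

/* gcc and clang also accept an attribute, at the cost of portability. */
int callback_b(int used, int unused_arg __attribute__((unused)))
{
    return used * 2;
}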
 

Noob

Tim said:
Usually turning on optimization (eg, -O2) will allow
additional warning conditions to be checked, but it
also can have the effect of removing some warnings
that are generated without it.

Do you have an example?
 
