knowing what and when to feature test

Garrett Smith

Garrett said:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Please read that again: "stopped at CR phase in 2007".

"CR" stands for "Candidate Recommendation".

| Candidate Recommendation (CR)
| A Candidate Recommendation is a document that W3C believes has been
| widely reviewed and satisfies the Working Group's technical
| requirements. W3C publishes a Candidate Recommendation to gather
| implementation experience.

A recommendation (REC) can be used as a normative reference.
http://www.w3.org/2005/10/Process-20051014/tr.html#rec-track-doc
(in contrast to a CR, a REC can be a normative ref).
 
Thomas 'PointedEars' Lahn

Garrett said:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Please read that again: "stopped at CR phase in 2007".

"CR" stands for "Candidate Recommendation".

[...]
A recommendation (REC) can be used as a normative reference.
http://www.w3.org/2005/10/Process-20051014/tr.html#rec-track-doc

But a CR cannot.
If you look at the conclusion I wrote:-

Your conclusion is irrelevant. You should have never cited this document
in that manner in the first place (whom are you trying to fool here?):

| Status of this Document
|
| [...]
| Publication as a Candidate Recommendation does not imply endorsement by
| the W3C Membership. This is a draft document and may be updated, replaced
| or obsoleted by other documents at any time. It is inappropriate to cite
| this document as other than work in progress.


PointedEars
 
Garrett Smith

Thomas said:
Garrett said:
Thomas said:
Garrett Smith wrote:
Thomas 'PointedEars' Lahn wrote:
IMHO, it is rather unlikely that, since the same parser and layout
engine would be used as for XHTML 1.0 (when served declared as
application/xhtml+xml or another triggering media type),
`document.body'
would not be available in XHTML (Basic) 1.1. Especially as XHTML
(Basic) 1.1 defined the `body' element in the required Structure
Module that all XHTML 1.1-compliant user agents MUST support.
There is "WICD Mobile 1.0" that suggests a subset of HTML DOM for
mobile devices. That subset does not include a body property.

Seems to have stopped at CR phase in 2007.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Please read that again: "stopped at CR phase in 2007".

"CR" stands for "Candidate Recommendation".

[...]
A recommendation (REC) can be used as a normative reference.
http://www.w3.org/2005/10/Process-20051014/tr.html#rec-track-doc

But a CR cannot.

Yes, that's a true statement, but beside the point.

The document linked was not used as a normative reference.
Your conclusion is irrelevant. You should have never cited this document
in that manner in the first place (whom are you trying to fool here?):

The only person that seems to be fooled is *you*. The CR is not a
normative reference, so arguing that I am wrong by treating it as if it
were a normative reference seems to be on the wrong track.

I don't see where you're going with this (probably nowhere).

I want to know if any implementations are following that CR, and also
what the rationale for omitting document.body there is. I do not expect
you to answer those questions although someone else might be able to.
 
Garrett Smith

Jake said:
[...]
I want to know if any implementations are following that CR, and also
what the rationale for omitting document.body there is. I do not expect
you to answer those questions although someone else might be able to.

It might just seem that you quoted that CR as an argument and support
for "document.body might be undefined" (when it's not a bug).

What part of "Seems to have stopped at CR phase in 2007" sounds like an
argument?

Or is the argument in the conclusion I wrote:

| I don't know what the rationale is for omitting document.body, nor do
| I know which implementations actually do that. Anyone who is able to
| fix that, please do.

I don't see it.

Does any browser implement WICD Mobile 1.0 and use the DOM Level 2 HTML
Subset?

What do you think the reason for omitting document.body from that draft is?

Don't have answers? Join the club.
 
Thomas 'PointedEars' Lahn

Garrett said:
I meant what I wrote: Don't expect nonstandard behavior from standard
properties.

Nonsense. Obviously you have not paid attention to previous discussions.
Does setAttribute()/getAttribute() ring a bell?
Code that is expecting XHR to work over the file: protocol is expecting
nonstandard behavior (though technically XHR itself is nonstandard, as
it is a W3C WD).

Nonsense. There is _nothing_ that says XHR must not be used for `file:'.
One should be aware, though, that certain conditions must apply for this
to work. For example, in MSXML it requires the use of ActiveXObject().
Code that is expecting assignment to DOM domstring properties to be
converted to string is expecting nonstandard behavior.

Nonsense. The API Specification does not say how implementations should
behave there. While there is indication that it would be unwise to rely on
implicit type conversion, that is certainly not based on an expectation of
nonstandard behavior.
Code that is expecting typeof document.images == "object" is expecting
nonstandard behavior.

Nonsense. `document.images' is a reference to a host object. Host objects
are free to implement whatever `TypeOf' algorithm they want per ES3F, with
any possible string value as a result. While ES5 limits that to not
include "object" (and other values), there are more implementations that
implement ES3F than ES5 (whose current status is not entirely clear).
That is, it is certainly not safe to have the
aforementioned expectation, but it is also NOT an expectation of non-
standard behavior.
Code that uses malformed, nonconformant HTML is expecting nonstandard
behavior.

Nonsense. The HTML standard makes recommendations as to how parsers are
supposed to handle invalid markup. But again, it is not wise to rely on
that as those are only recommendations.
Regarding XHR, the working draft explicitly states that protocols other
than http and https are outside the scope of the spec (or
implementation-dependent). Code that is expecting XHR to work over the
file: protocol is expecting nonstandard behavior.

You don't get it, do you? Working drafts are NOT to be cited as reference
material, as anything other than "work in progress". They are NOT
standards. While that already follows from the W3C Process Document and
common sense, the very text of the Working Draft you are referring to
explicitly says so in its "Status of this Document" section.


PointedEars
 
David Mark

It was an old "bug" of Mozilla where document.body was undefined. That
got fixed around 2003.


There is "WICD Mobile 1.0" that suggests a subset of HTML DOM for mobile
devices. That subset does not include a body property.

Seems to have stopped at CR phase in 2007.
http://www.w3.org/TR/WICDMobile/#dom

http://www.w3.org/TR/WICDMobile/#dom-html-ref
| interface HTMLDocument : Document {
| NodeList getElementsByName(in DOMString elementName);
| };

I don't know what the rationale is for omitting document.body, nor do I
know which implementations actually do that. Anyone who is able to fix
that, please do.

What does it matter? Just use a wrapper that tries document.body
first, then gEBTN, etc. You need it for some browsers and parse modes
(e.g. XML). Return null if all fails. Substitute a simpler wrapper
when the context allows:-

function getBodyElement(doc) {
  return (doc || document).body;
}

var body = getBodyElement();

if (body) { // Needs a reference to BODY to work
...
}
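For contrast, here is a hedged sketch of the fuller wrapper described above (document.body first, then gEBTN, null if all fails); the function name is illustrative and the actual My Library logic may differ:

```javascript
// Sketch only: tries document.body, then falls back to
// getElementsByTagName (needed in some browsers and parse modes,
// e.g. XML, where the body property is absent), returning null
// if everything fails so calling apps can degrade.
function getBodyElementFull(doc) {
  doc = doc || document;
  if (doc.body) {
    return doc.body;
  }
  if (doc.getElementsByTagName) {
    var bodies = doc.getElementsByTagName("body");
    if (bodies && bodies.length) {
      return bodies[0];
    }
  }
  return null;
}
```

Callers still test the result for null before using it, as in the simpler wrapper above.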
 
David Mark

[...]
...David Mark's "My Library" and Garrett Smith's "APE
JavaScript library" are certainly helping in getting a feel for how an
experienced developer may apply those principles. so maybe my naive
attempts will eventually reach maturity.

Thanks for the mention. My Library was just a lark, but it only took
a week or so to turn it into a contender. It's the only JS library or
framework that allows for progressive enhancement and certainly the
only major one (in terms of scope, not marketing efforts) that is free
of browser sniffing. From recent testing, it has been shown to
degrade gracefully in browsers that were released before the turn of
the century. It's quite capable on dinosaurs like IE5 and NN1. Less
so with Opera < 7, but then it degrades in a way that cues calling
apps to avoid hazards. Try that with jQuery or the like and you'll
see immediately why they "deprecate" all but the latest versions of
major browsers (it's all they have time to keep up with as they have
to keep changing their code due to past unfortunate inferences). ;)

The ES end of it is fairly trivial as most browsers are using an old
and established standard. The DOM stuff was chaos for the first five
years of the past decade and then settled down (just in time for JS
library authors to come along and introduce their own misconceptions
and mixups). The My Library test page demonstrates this quite nicely
(with companion features showing how the libraries of each era managed
to foul everything up).
from searching previous discussion on this list i think it's clear none
of the "brand name" (e.g. Prototype, jQuery, Dojo etc.) scripts have
much of anything to offer? are there any other scripts worth studying or
are there things in the "brand name" scripts worth looking at?

The "brand name" scripts (AKA "major") are simply names from the
past. No, they don't have anything to offer. They wouldn't have been
acceptable in 2005 (around when most were conceived). It's been half
a decade of demonstrable futility since then.

[...]
perhaps i'm particularly masochistic but regardless of extinction
Netscape Navigator 2 (and other dinosaurs) is just the sort of
environment i'm interested in.

People ask me why I bother to test ancient Opera browsers, handhelds,
Mozilla, etc. It's because ancient browsers have degradation paths
too. Virtually any old browser is a good test and can expose flaws in
the feature detection that might adversely affect other browsers (e.g.
lesser browsers in mobile devices, gaming consoles, etc). The only
people who will tell you to test just in the latest (or modern)
browsers are people who are hiding from the fact that their scripts are
bunk. If you can't support the past, what hope is there for the
future?
my interest is in determining if pages
can be made that work (and take advantage of) modern ECMAScript
implementations while not completely falling over (i.e. creating script
error windows) in even the oldest dynamic browsers.

You bet. My Library is a clinic on that.

[...]
my tests with Netscape Navigator 2.0.2 seem to confirm your assertion
regarding `Number.prototype.toString()`.

In reality, NS2 (and IE3) are too old to be practical tests. NN4 is
pretty screwy too. But there's no excuse for a Website that blows up
in IE5.5 or Opera 7. The currently popular libraries (which are only
popular among people who have no idea what they are doing) are lucky
to support IE8 by itself (most can't). For years, jQuery blew up in
IE when ActiveX was disabled (despite the public outcry). To this
day, jQuery is completely oblivious to quirks and compatibility
modes. I know it sounds crazy, but the people who "design" Websites
don't test for shit. They want to "drop" IE6/7 because they could
never really do their jobs with them in use. Yes, these are the same
people who "argue" endlessly that they are dealing with "Real World"
issues (where nobody uses IE ever I guess).

Yes, Opera's "Portal" throws several errors in Opera 6, which is pretty
ludicrous considering they wrote both the browser and the
documents. :)
as soon as i got your message pointing to the evolt browser archive the
next thing i did was download Netscape Navigator 2.0.2. except in my
case (a Windows XP virtual machine in VMWare) the default homepage
actually crashed the browser;

That can happen too. Modularity helps as it allows removing a piece
at a time until the crash goes away.
it took a few tries to be fast enough to
hit the stop button before the crash.

LOL. The home page URI (and everything else) is in the Registry. ;)
now that i have a blank homepage
it seems to be reliable enough for testing; although every single public
web page i've pointed it at since threw multiple script errors or worse
(e.g. <http://gmail.com/>).

Doubled-over LOL. Gmail? If My Library can be considered a good
example of cross-browser scripting (call it an 8 at this point), GMail
is a colossal embarassment to anyone who ever worked on it (I can't
imagine anyone would admit to it). Call it minus about a million.
And no, I'm not exaggerating for effect. That site is the poster
child for browser scripting futility, with only other Google apps in
serious competition. So I suggest you test something other than
Google-authored sites.

Here's one I use as a yardstick for older (but not too ancient)
browsers:-

http://www.hartkelaw.net/

As mentioned, the My Library builder and test page are also useful for
this.
 
John G Harris

Nonsense. The HTML standard makes recommendations as to how parsers are
supposed to handle invalid markup. But again, it is not wise to rely on
that as those are only recommendations.
<snip>

Which HTML standard is that? The current standard makes no such
recommendations.

John
 
David Mark

I had wanted to ask about that already. I keep reading that the
particular problem here is only the XMLHttpRequest in Internet
Explorer 7, which exists and works fine, but refuses to touch
any URL that begins with "file:".
Right.


If yes, then it would be one of the rare examples where browser
version sniffing is appropriate, as you cannot elegantly
feature-test this. (Firing off an Ajax request just for the
purpose of feature-testing is out of the question, I suppose.)

Not version sniffing, but an object inference is in order. As we know
that all ActiveX versions support file:, we can support file: if we
can create an ActiveX XHR object. See createXmlHttpRequest in My
Library.
What most people seem to do is cruder. They check the URL for
the file: prefix and the browser for the presence of MSXML, and
if both are present, they use MSXML even in browsers where
XMLHttpRequest is perfectly usable, like in Internet Explorer 8.

They do what?
Do I see things correctly? The example below is an example for
the crude method without browser sniffing.

Not exactly.

[...]
// Constructor for universal XMLHttpRequest:
function XHR() {
// For support of IE before version 6 add "Msxml2.XMLHTTP":
var modes = ["Msxml2.XMLHTTP.6.0", "Msxml2.XMLHTTP.3.0"];

Oh, this is what you meant by MSXML. These are programmatic IDs for
ActiveX components. You can be sure that XMLHttpRequest uses the same
components behind the scenes. ;)
// If working with the local file system, try Msxml2
// first, because IE7 also has an XMLHttpRequest,
// which, however, fails with the local file system:
Right.

if (location.protocol === "file:" && ActiveXObject)

Poor feature detection. Use isHostMethod or the like.
for (var i = 0; i < modes.length; i++)
try {
return new ActiveXObject(modes[i]);
} catch (ignore) {}
else if (XMLHttpRequest) return new XMLHttpRequest();


Same and missing try-catch (this constructor can be disabled by the
user, just like ActiveXObject). I wonder if jQuery remembered to wrap
that one too. :)
else for (var i = 0; i < modes.length; i++)
try {
return new ActiveXObject(modes[i]);
} catch (ignore) {}


A bit redundant. :)

I don't like that it runs these tests every time through, but the
basic ideas are sound. Checking the UA string would be far more
crude, not to mention disaster-prone. ;)
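The approach discussed above (object inference for file:, guarded references instead of bare ones, and deciding once rather than testing on every request) can be sketched roughly as follows; `createXhrFactory` and the parameterized `win` argument are illustrative, not My Library's actual code:

```javascript
// Hedged sketch: decide once up front which XHR creation strategy
// applies, and return a factory function, so the feature tests do
// not run on every call. Bare global references are wrapped in
// typeof tests (an isHostMethod-style check is assumed here).
function createXhrFactory(win) {
  var modes = ["Msxml2.XMLHTTP.6.0", "Msxml2.XMLHTTP.3.0"];

  function tryActiveX() {
    for (var i = 0; i < modes.length; i++) {
      try {
        return new win.ActiveXObject(modes[i]);
      } catch (ignore) {}
    }
    return null;
  }

  // Object inference: ActiveX XHR versions are known to support
  // file:, while IE7's native XMLHttpRequest refuses file: URLs.
  if (win.location && win.location.protocol === "file:" &&
      typeof win.ActiveXObject !== "undefined") {
    return tryActiveX;
  }
  if (typeof win.XMLHttpRequest !== "undefined") {
    return function () {
      try {
        return new win.XMLHttpRequest();
      } catch (e) {
        // The constructor can be disabled by the user, just like
        // ActiveXObject, so fall back rather than blow up.
        return tryActiveX();
      }
    };
  }
  if (typeof win.ActiveXObject !== "undefined") {
    return tryActiveX;
  }
  return null; // calling app should degrade gracefully
}
```

In a browser this would be called as `createXhrFactory(window)`; a null return cues the calling app to avoid XHR entirely.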
 
Garrett Smith

Thomas said:
Nonsense. Obviously you have not paid attention to previous discussions.
Does setAttribute()/getAttribute() ring a bell?

No. I actually have paid attention to most discussions, but don't know
which one you are referring to now. Or why.

If I had to guess why, my guess would be that you want to try to prove
that you are correct about calling "nonsense".
Nonsense. There is _nothing_ that says XHR must not be used for `file:'.

You are setting up a straw man. Nobody said that XHR must not be used
for file.

The problem with using XHR for file: is that doing so expects behavior
from an object where that behavior has been specified as being
implementation-dependent. It is a false expectation that is based on
something not stated.
One should be aware, though, that certain conditions must apply for this
to work. For example, in MSXML it requires the use of ActiveXObject().


Nonsense. The API Specification does not say how implementations should
behave there.

Assigning anything other than a DOMString to a property that is
specified as DOMString is unspecified.

Expecting type conversion to occur on that value is expecting
nonstandard behavior.

While there is indication that it would be unwise to rely on
implicit type conversion, that is certainly not based on an expectation of
nonstandard behavior.

That is a false statement.
Nonsense. `document.images' is a reference to a host object. Host objects
are free to implement whatever `TypeOf' algorithm they want

If a host object may result in something other than "object", it would
certainly make very little sense to expect the result to be "object".

It is not hard to find an implementation where typeof document.images
!== "object". Many versions of Safari are examples.

A program that has an expectation that typeof document.images !==
"function" or typeof document.images === "object" will have problems.
This NG saw such examples circa 2007.

A program that expects typeof document.images === "object" is making a
false assumption. Such behavior is nonstandard and should not be relied
upon.

In ES5, typeof of a callable object, host or otherwise, should result in
"function", and so in a conformant implementation where document.images
is callable, typeof document.images === "function" must be true.

per ES3F, with
any possible string value as a result. While ES5 limits that to not
include "object" (and other values), there are more implementations that
implement ES3F than ES5 (whose current status is not entirely clear).
That is, it is certainly not safe to have the
aforementioned expectation, but it is also NOT an expectation of non-
standard behavior.
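The safer alternative implied by the point above can be sketched briefly: rather than comparing typeof document.images against any particular string, test for the capability the code actually needs. The function name here is hypothetical:

```javascript
// Sketch only: typeof document.images can report "object" in some
// hosts and "function" in others (e.g. older Safari, or any ES5
// host where the collection is callable), so test the member the
// code will actually use instead of the type tag.
function canUseImagesCollection(doc) {
  var images = doc.images;
  return !!images && typeof images.length === "number";
}
```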


Nonsense. The HTML standard makes recommendations as to how parsers are
supposed to handle invalid markup. But again, it is not wise to rely on
that as those are only recommendations.

Other than a few trivial suggestions, HTML 4 does not define how the
parser is supposed to handle invalid markup.

Regarding non-conformant behavior, HTML 4 makes some suggestions for
that, too, and in that very same appendix.

Expecting the browser to perform that error correction is expecting
nonstandard behavior. Error-correction beyond those few suggestions is
not standard. A program that requires non-standard behavior is at great
risk of failing.
You don't get it, do you? Working drafts are NOT to be cited as reference
material, as anything other than "work in progress". They are NOT
standards. While that already follows from the W3C Process Document and
common sense, the very text of the Working Draft you are referring to
explicitly says so in its "Status of this Document" section.
The WD states that other protocols are implementation-dependent, and so
I don't see why you're insisting that the WD is wrong and that file:
would have to be supported.

The best a program could hope for is that the WD is correct and
correctly implemented.

Expecting file protocol to be supported means that either

A) The WD is incorrect, or B) All browsers that support XMLHttpRequest
must implement the behavior that is defined as non-standard, and must do
it in the way that you imagine to be correct.

The XMLHttpRequest specification is currently a working draft, and so
statements about standard behavior for XHR cannot use that draft as a
normative reference.

The XMLHttpRequest specification is also highly visible, actively
maintained, mature, and has other specifications as dependencies. If
there is a problem with not supporting file:, I suggest you post to the
relevant W3C mailing list and explain your proposed solution so that the
problem can be corrected.

Until such changes occur, expecting certain behavior of XHR using the
file: protocol is a false expectation.
 
Thomas 'PointedEars' Lahn

John said:
You ought not to link to an obsolete version of the standard.

You don't know what you are talking about. Standard-wise it really doesn't
get any more current than "html4/" at the moment.
Section B.1 is talking about semantic errors : elements, attributes,
attribute values, and character entities that are not recognised.

And now pray read its title.
Garrett was talking about "malformed, nonconformant HTML", which I take
to mean syntax errors such as <style></title> .

Your conclusions as to what could have been meant are irrelevant. There
*is* a standard that makes recommendations as to what to do about
invalid markup, which includes any notion of "malformed, nonconformant
HTML". You have been proven wrong; stop whining and learn to live with
it.


PointedEars
 
Eric Bednarz

Thomas 'PointedEars' Lahn said:
John G Harris wrote:
You don't know what you are talking about. Standard-wise it really doesn't
get any more current than "html4/" at the moment.

Somebody doesn’t know what she is talking about, for sure. ‘HTML 4’ is a
W3C recommendation; the only relevant HTML *standard* is ISO/IEC
15445:2000.
And now pray read its title.

Now pray read the paragraph on error conditions in the conformance
section in the *normative* part of the prose. No informative hand waving
required at all to support the thesis “Code that uses malformed,
nonconformant HTML is expecting nonstandard behavior”.
You have been proven wrong; […]

I call shenanigans.
 
John G Harris

You don't know what you are talking about. Standard-wise it really doesn't
get any more current than "html4/" at the moment.

The W3C web site points to
<http://www.w3.org/TR/1999/REC-html401-19991224/>
Note the 401 in its name. You've picked a URI that foolishly points to
the HTML 4.01 document while having the appearance of pointing to the
HTML 4 document.

And now pray read its title.

Regardless of the title, the section is there "to facilitate
experimentation and interoperability between implementations of various
versions of HTML", not to advise on handling invalid markup in general.

Your conclusions as to what could have been meant are irrelevant. There
*is* a standard that makes recommendations as to what to do about
invalid markup, which includes any notion of "malformed, nonconformant
HTML".

But all it says about bad syntax is "Since user agents may vary in how
they handle error conditions, authors and users must not rely on
specific error recovery behavior", which is where this started.

You have
been proven wrong; stop whining and learn to live with it.

I say it was you who got it wrong.

John
 
Thomas 'PointedEars' Lahn

Garrett said:
No. I actually have paid attention to most discussions, but don't know
which one you are referring to now. Or why.

If I had to guess why, my guess would be that you want to try to prove
that you are correct about calling "nonsense".

OMG. Perhaps I am referring to the attribute discussions we frequently
have, all of which are related to standard properties?
You are setting up a straw man. Nobody said that XHR must not be used
for file.

Hardly. You were asserting that there was a standard to specify this when
you called using `file:' and expecting it to work "relying on
non-standard behavior".
Assigning anything other than a domstring for the value in an assignment
to a property that is specified as domstring is unspecified.

That does not make it non-standard behavior. It is only non-standard if
there is a standard that says type conversion must not happen.
Expecting type conversion to occur on that value is expecting
nonstandard behavior.
No.


That is a false statement.

It isn't. You have some serious misconceptions about standards instead.
If a host object may result in something other than "object", it would
certainly make very little sense to expect the result to be "object".

Still it is _not_ "expecting non-standard behavior".
It is not hard to find an implementation where typeof document.images
!== "object". Many versions of Safari are examples.

Non sequitur.
[snipped more fallacies]
Nonsense. The HTML standard makes recommendations as to how parsers are
supposed to handle invalid markup. But again, it is not wise to rely on
that as those are only recommendations.

Other than a few trivial suggestions, HTML 4 does not define how the
parser is supposed to handle invalid markup.

The recommendations are there, that's the point.
Regarding non-conformant behavior, HTML 4 makes some suggestions for
that, too, and in that very same appendix.
Exactly.

Expecting the browser to perform that error correction is expecting
nonstandard behavior.

No, evidently not.
Error-correction beyond those few suggestions is not standard.

No. It's the same misconception of yours again.
A program that requires non-standard behavior is at great
risk of failing.

Non sequitur.
You don't get it, do you? Working drafts are NOT to be cited as
reference material, as anything other than "work in progress". They are
NOT standards. While that already follows from the W3C Process Document
and common sense, the very text of the Working Draft you are referring
to explicitly says so in its "Status of this Document" section.

The WD states that other protocols are implementation-dependent, [...]

This WD (or any other working draft) is *irrelevant* with regards to
standards!


PointedEars
 
Thomas 'PointedEars' Lahn

Eric said:
Somebody doesn’t know what she is talking about for sure.

She, Erica?
‘HTML 4’ is a W3C recommendation, the only relevant HTML *standard* is
ISO/IEC 15445:2000.

Of course not. W3C Recommendations (with capital R) are considered Web
standards.

<http://www.w3.org/Consortium/>
<http://www.w3.org/standards/>
<http://www.w3.org/standards/about.html>
<http://www.w3.org/standards/faq.html#std>
<http://en.wikipedia.org/wiki/Web_standards>
<http://webstandardsgroup.org/standards/>
<http://www.opera.com/company/education/curriculum/>
<http://www.zeldman.com/dwws/>
<http://www.alistapart.com/articles/grokwebstandards/>
<https://developer.mozilla.org/en/using_web_standards_in_your_web_pages>
<http://www.456bereastreet.com/lab/developing_with_web_standards/>

(Want more proof? Google is your friend. [psf 6.1])
And now pray read its title.

Now pray read the paragraph on error conditions in the conformance
section in the *normative* part of the prose. No informative hand waving
required at all to support the thesis “Code that uses malformed,
nonconformant HTML is expecting nonstandard behavior”.
Nonsense.
You have been proven wrong; […]

I call shenanigans.

Parse error.


PointedEars
 
Thomas 'PointedEars' Lahn

John said:
The W3C web site points to
<http://www.w3.org/TR/1999/REC-html401-19991224/>
Note the 401 in its name. You've picked a URI that foolishly points to
the HTML 4.01 document while having the appearance of pointing to the
HTML 4 document.

AISB, you don't know what you are talking about. The "html4/" URI always
refers to the latest (most recent) version of HTML 4, which currently is
HTML 4.01. (That is, both mentioned URIs currently refer to the *same*
resource.)

,-<http://www.w3.org/TR/html4/>
|
| This version:
^^^^^^^^^^^^
| http://www.w3.org/TR/1999/REC-html401-19991224
| (plain text [794Kb], gzip'ed tar archive of HTML files [371Kb], a
| .zip archive of HTML files [405Kb], gzip'ed Postscript file [746Kb,
| 389 pages], gzip'ed PDF file [963Kb])
| Latest version of HTML 4.01:
| http://www.w3.org/TR/html401
| Latest version of HTML 4:
^^^^^^^^^^^^^^
| http://www.w3.org/TR/html4
| [...]

(Granted, the use of `latest' in English can be confusing. However, if you
had read more carefully ...)
[snipped more pointless babbling]


PointedEars
 
Eric Bednarz

Thomas 'PointedEars' Lahn said:
Eric Bednarz wrote:

Of course not. W3C Recommendations (with capital R)

You seem to have created a nice little RPG world there full of ad hoc
rules. Is downloadable content available?


So you read blogs and stuff. Good for you.
(Want more proof? Google is your friend. [psf 6.1])

Google is not my friend, as long as I have a choice, but thank you.

I suppose you mean I should use the results of random Google searches to
educate myself; while I appreciate this insight in your process of
knowledge acquisition, I’d rather stick to consulting relevant
resources.
 
