Nearly The Whole Of The Internet Is NOT W3C Valid


Alberto

Eh, unfortunately Google Groups no longer provides a way to reply to
the group for older posts (though the one I am referring to is not
older than one month), and I happen to come back to this after life
has demanded my attention elsewhere for a while :)
Yet I think your point deserves a reply.

You were referring to:
http://www.unitedscripters.com/spellbinder/internetexplorer.html

with the following observation:
In deference to Mike's request, I'll just say that the statement is
plain wrong.

The page incorrectly reports some sites as invalid; it offers no
analysis of the site errors, of what their cause or effect might be,
nor does it delve below the home page. It is at best superficial and
no meaningful conclusion can be drawn from it.

I personally lay no claim to being beyond criticism, but I would like
to be criticized for the mistakes that I make, and not also for those
that I don't.

This is why I feel like stressing that each page listed there and
declared invalid is:

1) correctly (and not "uncorrectly") declared invalid, as long as by
correctness we mean that the page declared invalid is declared such
not by _me_ but by the W3C Validator itself. This is exactly the case
for the whole of that stunning list of such famous websites, all
declared fully invalid.

2) It can all be proved by merely CLICKING the links (which obviously
has not been done, as can be inferred from the objection above), which
do NOT merely link to the site declared invalid, but to the W3C
validator with the URL of the site declared invalid appended as a
query string. This reproduces all the causes of the errors, of course,
which therefore are not neglected as you wrongly hint.

I hope you credit me with the impossibility, with hundreds of sites
linked there and declared invalid, of also reproducing for each of
them the excruciating length of the W3C validator errors it reports,
which in 90% of cases run to several hundred. I leave to the W3C
validator itself the task of listing the errors of the sites I declare
it lists as invalid, only so as not to be wrongly accused of not
listing them.

Which are they?
All the following sites are declared invalid by the W3C:


http://validator.w3.org/check?uri=http://www.nist.gov/ Validate National Institute Of Standards And Technology
http://validator.w3.org/check?uri=www.cnn.com Validate CNN
http://validator.w3.org/check?uri=www.looksmart.com Validate LookSmart
http://validator.w3.org/check?uri=www.yahoo.com Validate Yahoo
http://validator.w3.org/check?uri=www.google.com Validate Google
http://validator.w3.org/check?uri=www.lycos.com Validate Lycos
http://validator.w3.org/check?uri=www.netscape.com Validate Netscape
http://validator.w3.org/check?uri=www.excite.com Validate Excite
http://validator.w3.org/check?uri=www.altavista.com Validate Altavista
http://validator.w3.org/check?uri=www.tiscali.com Validate Tiscali
http://validator.w3.org/check?uri=www.aol.com Validate AOL
http://validator.w3.org/check?uri=http://www.vatican.va/phome_en.htm Validate Vatican (Holy See)
http://validator.w3.org/check?uri=www.buddhanet.net/ Validate BuddhaNet
http://validator.w3.org/check?uri=http://philanthropy.com/ Validate Philanthropy com
http://validator.w3.org/check?uri=http://www.nologo.org/newsite/home.php Validate NoLogo org
http://validator.w3.org/check?uri=www.fao.org/ Validate Food and Agriculture Organization of the United Nations
http://validator.w3.org/check?uri=www.uspacifistparty.org/ Validate US Pacifist Party
http://validator.w3.org/check?uri=www.gop.com/ Validate US Republican Party
http://validator.w3.org/check?uri=http://www.democrats.org/ Validate US Democratic Party
http://validator.w3.org/check?uri=http://www.gp.org/ Validate US Green Party
http://validator.w3.org/check?uri=http://www.wsws.org/ Validate World Socialist Web Site
http://validator.w3.org/check?uri=www.wrox.com Validate Wrox publisher
http://validator.w3.org/check?uri=www.samspublishing.com Validate Sams publisher
http://validator.w3.org/check?uri=www.mcgraw-hill.com Validate McGraw-Hill publisher
http://validator.w3.org/check?uri=www.sybex.com Validate Sybex publisher
http://validator.w3.org/check?uri=www.ziffdavis.com Validate ZiffDavis publisher
http://validator.w3.org/check?uri=www.doverpublications.com Validate Dover publisher
http://validator.w3.org/check?uri=www.apache.org Validate Apache
http://validator.w3.org/check?uri=http://www.allexperts.com Validate AllExperts com
http://validator.w3.org/check?uri=http://answerpoint.ask.com Validate Ask Jeeves
http://validator.w3.org/check?uri=http://www.nonags.com Validate Nonags
http://validator.w3.org/check?uri=http://www.tucows.com Validate Tucows
http://validator.w3.org/check?uri=http://www.pgpi.org/ Validate Pretty Good Privacy org
http://validator.w3.org/check?uri=http://www.usr.com Validate US Robotics
http://validator.w3.org/check?uri=http://www.intel.com Validate Intel
http://validator.w3.org/check?uri=http://www.logitech.com Validate Logitech
http://validator.w3.org/check?uri=http://www.epson.com Validate Epson
http://validator.w3.org/check?uri=http://www.canon.com Validate Canon
http://validator.w3.org/check?uri=http://www.nikon.com Validate Nikon
http://validator.w3.org/check?uri=http://www.xml.com Validate XML com
http://validator.w3.org/check?uri=http://www.xml.org Validate XML org
http://validator.w3.org/check?uri=http://friendfinder.com Validate Friend Finder
http://validator.w3.org/check?uri=http://www.mirc.com Validate mIRC com
http://validator.w3.org/check?uri=http://www.winzip.com Validate WinZip
http://validator.w3.org/check?uri=http://www.html.it Validate Html.it
http://validator.w3.org/check?uri=http://www.irfanview.com Validate IrfanView
http://validator.w3.org/check?uri=http://www.adobe.com Validate Adobe
http://validator.w3.org/check?uri=http://www.aaas.org/ Validate American Association for the Advancement of Science
http://validator.w3.org/check?uri=http://www.cancer.org/ Validate American Cancer Society
http://validator.w3.org/check?uri=http://www.worldwildlife.org/ Validate WWF
http://validator.w3.org/check?uri=http://www.olympic.org/ Validate Olympic Games org
http://validator.w3.org/check?uri=http://www.activist.ca/ Validate The Activist Network
http://validator.w3.org/check?uri=http://www-cs.stanford.edu/ Validate Stanford Computer Science Department
http://validator.w3.org/check?uri=http://www-cs-faculty.stanford.edu/~knuth/ Validate Donald Knuth's page
http://validator.w3.org/check?uri=http://www.greenpeace.org/ Validate Greenpeace
http://validator.w3.org/check?uri=www.nbc.com Validate NBC
http://validator.w3.org/check?uri=www.abc.com Validate ABC
http://validator.w3.org/check?uri=www.ebay.com Validate eBay com
http://validator.w3.org/check?uri=www.pcworld.com Validate PC World
http://validator.w3.org/check?uri=www.cnet.com Validate CNET com
http://validator.w3.org/check?uri=www.lockergnome.com/ Validate Lockergnome
http://validator.w3.org/check?uri=http://news.bbc.co.uk/ Validate BBC UK
http://validator.w3.org/check?uri=http://www.chinaview.cn/ Validate China View (Xinhuanet English)
http://validator.w3.org/check?uri=http://news.techwhack.com/ Validate TechWhack India
http://validator.w3.org/check?uri=http://www.webuser.co.uk/ Validate Web User UK
http://validator.w3.org/check?uri=www.informationweek.com Validate InformationWeek
http://validator.w3.org/check?uri=www.macworld.com Validate Macworld
http://validator.w3.org/check?uri=www.linux.org Validate Linux org
http://validator.w3.org/check?uri=www.oracle.com/ Validate Oracle com
http://validator.w3.org/check?uri=www.motorola.com/ Validate Motorola com
http://validator.w3.org/check?uri=www.softpedia.com/ Validate Softpedia com
http://validator.w3.org/check?uri=www.betanews.com/ Validate BetaNews com
http://validator.w3.org/check?uri=http://blogcritics.org/ Validate Blogcritics org
http://validator.w3.org/check?uri=www.geek.com/ Validate Geek com
http://validator.w3.org/check?uri=www.hp.com Validate Hewlett-Packard
http://validator.w3.org/check?uri=www.disney.go.com Validate Disney com
http://validator.w3.org/check?uri=www.ryanair.com Validate Ryanair com
http://validator.w3.org/check?uri=www.historychannel.com Validate History Channel
http://validator.w3.org/check?uri=www.webpronews.com Validate WebProNews
http://validator.w3.org/check?uri=www.monster.com Validate Monster com
http://validator.w3.org/check?uri=www.dice.com Validate Dice com
http://validator.w3.org/check?uri=www.blogger.com Validate Blogger com
http://validator.w3.org/check?uri=http://searchsecurity.techtarget.com Validate SearchSecurity com
http://validator.w3.org/check?uri=www.nationalgeographic.com Validate National Geographic
http://validator.w3.org/check?uri=www.icann.org Validate Internet Corporation For Assigned Names and Numbers
http://validator.w3.org/check?uri=www.nokia.com Validate Nokia
http://validator.w3.org/check?uri=http://www.usa.visa.com/?country=us Validate Visa
http://validator.w3.org/check?uri=http://www.home.americanexpress.com/home/mt_personal.shtml Validate American Express
http://validator.w3.org/check?uri=http://www.ain.cubaweb.cu Validate Cuban Official News Agency
http://validator.w3.org/check?uri=www.korea-dpr.com/ Validate Official Homepage - Democratic People's Republic of Korea
http://validator.w3.org/check?uri=www.symantec.com Validate Symantec
http://validator.w3.org/check?uri=www.redcross.org Validate Red Cross
http://validator.w3.org/check?uri=www.amnesty.org Validate Amnesty International
http://validator.w3.org/check?uri=www.scientology.org Validate Scientology
http://validator.w3.org/check?uri=www.un.org Validate United Nations
http://validator.w3.org/check?uri=www.unicef.org Validate United Nations Children's Fund
http://validator.w3.org/check?uri=www.cia.gov Validate Central Intelligence Agency
http://validator.w3.org/check?uri=www.fbi.gov Validate FBI
http://validator.w3.org/check?uri=www.eweek.com Validate eWeek
http://validator.w3.org/check?uri=www.match.com Validate Match com
http://validator.w3.org/check?uri=www.britannica.com Validate Encyclopedia Britannica Online
http://validator.w3.org/check?uri=www.webreference.com Validate WebReference
http://validator.w3.org/check?uri=www.napster.com Validate Napster
http://validator.w3.org/check?uri=www.foxmovies.com Validate 20th Century FOX
http://validator.w3.org/check?uri=www.cosmopolitan.com Validate Cosmopolitan
http://validator.w3.org/check?uri=www.php.net Validate PHP net
http://validator.w3.org/check?uri=www.opensource.org Validate Open Source org
http://validator.w3.org/check?uri=www.macromedia.com Validate Macromedia
http://validator.w3.org/check?uri=www.qualcomm.com Validate Qualcomm
http://validator.w3.org/check?uri=www.honda.com Validate Honda com
http://validator.w3.org/check?uri=www.mercedes-benz.de Validate Mercedes-Benz
http://validator.w3.org/check?uri=www.house.gov Validate US House of Representatives online
http://validator.w3.org/check?uri=www.assemblee-nationale.fr Validate French Parliament online
http://validator.w3.org/check?uri=www.bundestag.de Validate German Parliament online
http://validator.w3.org/check?uri=www.perl.com Validate Perl com
http://validator.w3.org/check?uri=www.python.org Validate Python org
http://validator.w3.org/check?uri=www.webmasterpoint.org Validate Webmasterpoint org
http://validator.w3.org/check?uri=http://dynamicdrive.com/ Validate Dynamic Drive
http://validator.w3.org/check?uri=www.penguin.co.uk Validate Penguin Books
http://validator.w3.org/check?uri=www.shakespeare.com Validate Shakespeare com
http://validator.w3.org/check?uri=http://web.mit.edu Validate MIT edu
http://validator.w3.org/check?uri=www.stanford.edu Validate Stanford University
http://validator.w3.org/check?uri=www.harvard.edu Validate Harvard University
http://validator.w3.org/check?uri=www.berkeley.edu Validate UC Berkeley
http://validator.w3.org/check?uri=www.evolt.org Validate Evolt
http://validator.w3.org/check?uri=www.useit.com Validate Jakob Nielsen's Useit dot com
http://validator.w3.org/check?uri=www.sciam.com Validate Scientific American
http://validator.w3.org/check?uri=www.nytimes.com Validate New York Times
http://validator.w3.org/check?uri=www.reuters.com Validate Reuters
http://validator.w3.org/check?uri=www.guardian.co.uk Validate Guardian co uk
http://validator.w3.org/check?uri=www.forbes.com Validate Forbes
http://validator.w3.org/check?uri=www.arabnews.com Validate Arab News
http://validator.w3.org/check?uri=www.telegraph.co.uk Validate Telegraph UK
http://validator.w3.org/check?uri=www.businessweek.com Validate BusinessWeek
http://validator.w3.org/check?uri=www.britishairways.com Validate British Airways
http://validator.w3.org/check?uri=www.lufthansa.com Validate Lufthansa
http://validator.w3.org/check?uri=www.ti.com Validate Texas Instruments
http://validator.w3.org/check?uri=www.amazon.com Validate Amazon
http://validator.w3.org/check?uri=www.apple.com Validate Apple
http://validator.w3.org/check?uri=www.playboy.com Validate Playboy
http://validator.w3.org/check?uri=www.lemonde.fr Validate Le Monde (France)
http://validator.w3.org/check?uri=www.lefigaro.fr Validate Le Figaro (France)
http://validator.w3.org/check?uri=www.spiegel.de Validate Der Spiegel (Germany)
http://validator.w3.org/check?uri=www.repubblica.it Validate La Repubblica (Italy)
http://validator.w3.org/check?uri=www.iht.com Validate International Herald Tribune
http://validator.w3.org/check?uri=www.thetimes.co.uk Validate The Times
http://validator.w3.org/check?uri=www.newsweek.com Validate Newsweek
http://validator.w3.org/check?uri=www.time.com Validate Time
http://validator.w3.org/check?uri=www.samsung.com Validate Samsung
http://validator.w3.org/check?uri=www.mozref.com/reference/objects/Selection Validate A Random Mozilla Documentation Page
http://validator.w3.org/check?uri=www.sun.com Validate Sun Microsystems
 

Alberto

Of course, the list was made a few months ago, precisely from
September 10 to September 12, 2005.

If by chance a site is now reported as valid, please do not infer from
that that my thesis is a fantasy: just try more than once, should that
(which I doubt) be the case. I am NOT inventing this, and it can be
proved.

As for the objection that we should perform a "deeper" analysis,
namely checking the validity not only of the front page (or home page,
however you may call it) but also of the internal links: this is a
task I have no fear of declaring beyond my forces. I do not (NOT) deny
its utility, but with sites of that scope, articulated in thousands of
pages each, I cannot run an analysis on every one of their links as
well. Some of those sites are Yahoo and Google: what do you want me to
do, twelve billion checks?

If somebody wants to undertake it, they shall be welcomed by me.

But I would rather not be met with objections that speculate about the
possible validity of UNCHECKED internal links, with the idea that this
could erase the no longer speculative but actual fact that the home
pages are invalid. The latter is a FACT. The former has not been
proved yet, not even by those who wish it were the case and who
sponsor such a wishful hypothesis as evidence.

Finally, the idea that hypothetical (forgive my misspellings, English
is not my native language) successful validations somewhere deeper in
the websites could make up for the invalidation of the home page
appears inconsistent, or delusional.

If we are for Validation, either we are for validation or we are not.
We cannot be for half a Validation, or for a validation that applies
to scattered hypothetical internal links but not to the most apparent
element of any site: its front page.

Rather, and besides, the contrary happens, as may be proved if you
check the website of mcafee.com (the antivirus): its FRONT page
validates; its internal links don't. So while we speculate about valid
internal links that we can't find, we find at least sure evidence of
the opposite.
The same applies to Mozilla's documentation: the front page validates,
the internal links at times don't.

Facts still stay on my side, which I take no particular pride in: I am
only trying to make apparent the REAL, and not imaginary, nature of a
fact that is out there.
The W3C validator is virtually declaring the whole of the net invalid.

This is the absurdity we are dealing with.
I do not even care why, or for which errors, it declares them invalid:
invalid is invalid, and I do not allege that the criterion that
determines what is invalid and what is not should be some criterion
other than the one the W3C uses to come out with its result: Invalid.

If it declares a site invalid, it is invalid by the W3C, and this is
what we are talking about.
We can't, in the name of compliance with the W3C, discriminate between
errors that the W3C correctly reports as such, and errors that should
no longer be considered such although the W3C says they are. Such
positions are not defensible while preserving intellectual honesty. We
can't apply two different rules to the W3C: one that sponsors
compliance and upholds it, and one that condones the lack of it,
declaring the latter still one more piece of evidence in favour of the
compliance that is not there.

I have no magic formula, nor am I the ultimate custodian of
intellectual honesty: but you can prove to me that you have it too
only by defending the W3C, if you want to do so, by sticking to what
the W3C says. Invalid means Invalid.
I am one of those who hold that this doesn't matter, that compliance
with the W3C parser is UTTERLY meaningless. That the W3C parser is a
hopeless tool, which accuses of invalidity, for the silliest reasons,
even the sites to which the best engineers available on planet Earth
have contributed; and whose unique outcome is to declare invalid sites
that have been browsed and used by scores of millions of different
browsing platforms since 1990, and that have all prospered and even
reached the top Nasdaq quotations despite being invalid by the
hundreds of errors.

But if you sponsor the opposite position, whose legitimacy I do NOT
contest, you can NOT sponsor it by applying to the W3C rules that,
although contradictory, are designed to make it come out as being
right even when it declares a site to be wrong.
Invalid by the W3C rules means invalid by the only rules we are
talking about, the rules the W3C parser uses to deliver that
invalidation, period.
 

RobG

Alberto said:
Eh, unfortunately Google Groups no longer provides a way to reply to
the group for older posts (though the one I am referring to is not
older than one month), and I happen to come back to this after life
has demanded my attention elsewhere for a while :)
Yet I think your point deserves a reply.

You were referring to:
http://www.unitedscripters.com/spellbinder/internetexplorer.html

with the following observation:

That appears to be me, so I'll reply.

The original post subject was "Browser inconsistencies: what is the most
efficient development regime?" and is accessible here:

I personally lay no claim to being beyond criticism, but I would like
to be criticized for the mistakes that I make, and not also for those
that I don't.

Firstly, let's put the statement back into context. It was in response
to this:

(e-mail address removed) wrote:
[...]
Please, as for the story about W3C compliance, have a look at:
http://www.unitedscripters.com/spellbinder/internetexplorer.html
scroll to HALFWAY through that file; you do NOT have to read it. Just
locate the middle of the page, where there is a list of over 200 sites,
from Google to Intel, from Yahoo to Logitech, from Amazon to
McGraw-Hill, from alpha to omega, that do NOT pass the W3C test, by
several hundreds of errors each.
That is where the true importance of full compliance with W3C
guidelines lies.


My statement was an observation, and it is still accurate. It was not
a criticism of the site, but of the comments made by 'vall', who used
the page to draw conclusions about whether some sites were more
closely aligned to W3C standards than others.

At the time the page reported Mozilla.org as being invalid, whereas in
fact it wasn't. I note you have changed the link to the Mozilla
documentation site, which doesn't validate.

How about changing the Microsoft link to MSDN?

<URL:http://validator.w3.org/check?uri=http://msdn.microsoft.com/default.aspx>

It is invalid even as HTML 4 transitional, with far more errors than the
Mozilla Object Reference site's HTML 4 strict.

This is why I feel like stressing that each page listed there and
declared invalid is:

1) correctly (and not "uncorrectly") declared invalid, as long as by

"uncorrectly"?

correctness we mean that the page declared invalid is declared such
not by _me_ but by the W3C Validator itself. This is exactly the case
for the whole of that stunning list of such famous websites, all
declared fully invalid.

I made no comment at all about whether you personally were responsible
for reporting whether sites are valid or not. My exact words were "The
page incorrectly reports...".

The page incorrectly reported Mozilla as invalid; there may have been
others.

2) It can all be proved by merely CLICKING the links (which obviously
has not been done, as can be inferred from the objection above), which
do NOT merely link to the site declared invalid, but to the W3C
validator with the URL of the site declared invalid appended as a
query string. This reproduces all the causes of the errors, of course,
which therefore are not neglected as you wrongly hint.

That was exactly what I did. My 'hint' is that there was no attempt to
analyse the results of validation of individual sites. That is not a
criticism but an observation.

The criticism is for those who attempt to draw conclusions from the
results without analysing the errors of individual sites. A site with a
single trivial error is treated the same as one that might be so bad it
doesn't even render.

Does the fact that Microsoft's documentation validates with more errors
than Mozilla's make it better or worse? Apple.com is 'invalid' with but
3 trivial errors. Is that more or less compliant than Telegraph.co.uk
with 270 errors?

I hope you credit me with the impossibility, with hundreds of sites
linked there and declared invalid, of also reproducing for each of
them the excruciating length of the W3C validator errors it reports,
which in 90% of cases run to several hundred. I leave to the W3C
validator itself the task of listing the errors of the sites I declare
it lists as invalid, only so as not to be wrongly accused of not
listing them.

Yes, that's exactly what you need to do if you want the results of your
validation exercise to have any meaning. A good start would be to
classify sites based on the number of errors, then look at the types of
errors, then select some sample sites and thoroughly analyse the errors.
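The grouping step suggested above is easy to sketch. The counts below are illustrative, borrowed from figures quoted later in this thread, and the bucket boundaries are an arbitrary choice of my own:

```python
# Sketch: classify sites by the number of validator errors reported,
# the first step of the analysis proposed above. Counts are
# illustrative, taken from figures quoted elsewhere in this thread.
error_counts = {
    "apple.com": 3, "canon.com": 8, "altavista.com": 38,
    "google.com": 51, "aol.com": 277, "yahoo.com": 281,
}

def bucket(n):
    """Rough severity class for an error count (boundaries arbitrary)."""
    if n == 0:
        return "valid"
    if n <= 10:
        return "trivial (1-10)"
    if n <= 100:
        return "moderate (11-100)"
    return "severe (>100)"

groups = {}
for site, n in error_counts.items():
    groups.setdefault(bucket(n), []).append(site)

for label, sites in groups.items():
    print(label, "->", ", ".join(sites))
```

A real survey would then look at the types of errors within each bucket before drawing any conclusion.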

You might even realise that the validator reports many things as errors
which in fact aren't.

Seems like a lot of work? Yes, of course, but it is absolutely
necessary if you really want an analysis of the level of compliance with
web standards to be taken seriously.

[...]
 

Alberto

But Rob, this is not something between me and you. I understand I
started the thread taking your reply as an occasion, but the scope we
are dealing with, which merely springs from that occasion, spans
beyond it by such a degree and in so obvious a manner that in no way
should you misread it as something between you and me.

I also understand that you may think terms like "uncorrectly" are not
formally correct spelling, but you also have to realize that these
groups are open to an international audience and read by many persons
from all around the globe, so it is just impossible that we are all
native English speakers. I am not, in fact, so you should be indulgent
with my possible mistakes, and you shouldn't use my possible
grammatical mistakes as an argument to gain a point in a cause that
cannot be won anyway.

I am not here to invent a thesis. Like yourself, I too am an adult who
can produce an intellectual effort in order to stress the importance
of something that is corroborated by facts, not uncorroborated. As
said, I do not claim to be beyond criticism, but I find it
intellectually impossible to dismiss a list like the one I provide on
the grounds you attempt to propose.

Mozilla validated, true: yet, as they can surely confirm to you if
they are honest, when I made the list on the nefarious day of
September 11, 2005, it didn't. Now, in so long a list, if all we are
left with is the desperate search for a link incorrectly listed as
invalid, so as to uphold it as an alleged disqualification of the
whole list, we have been left with very little, and we are clearly
scrambling for a desperate line of defence at the bottom of the
barrel. If we are left with that, we are actually proving the strength
of the list, NOT its weakness.

We have a list that is not only long, but that by its very length
could have been longer.
Most importantly, the names listed in it are stunning ones: they are
not a part of the internet, they are the internet.

Now, deeming yourself relieved of any burden of proof while at the
same time perceiving yourself as having a stake in this cause, you
shift the whole burden of proof onto me, apparently claiming that I
should be endowed with titanic shoulders and that I, who have already
provided proof, should nonetheless provide even more, which you
yourself acknowledge as nearly impossible. Yet at the same time you
contend in this cause, and you feel that you, although contending,
have to provide NO proof while you attempt to disqualify the work of
others, sitting in such a magnificently convenient position.

You have that list there. You can group it by errors yourself. Do you
know what a good piece of work that list is? So good that in order to
have what you ask for, you just have to click the links and you shall
instantly have all the numbers you covet.

Read them; I do this on your behalf, though all it would have required
of you is to move your finger and click:

yahoo: 281 errors
AOL: 277 errors
altavista: 38 errors
excite: 235 errors
Netscape: 101 errors
Lycos: 170 errors
Google: 51 errors
NBC: 317 errors
eBay: 222 errors
Monster com: 256 errors
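A quick tally of those figures (as transcribed in this post; current validator output will differ) shows just how large the counts run:

```python
# Totals over the error counts quoted above, as listed in this post.
counts = {
    "yahoo": 281, "AOL": 277, "altavista": 38, "excite": 235,
    "Netscape": 101, "Lycos": 170, "Google": 51, "NBC": 317,
    "Ebay": 222, "Monster com": 256,
}

total = sum(counts.values())
average = total / len(counts)
print(f"{len(counts)} sites, {total} errors in total, "
      f"average {average:.0f}, maximum {max(counts.values())}")
# -> 10 sites, 1948 errors in total, average 195, maximum 317
```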

Even just faced with these NAMES and these NUMBERS, you should ALREADY
feel you can't defend the position any longer. Even at such an early
stage, because you can NOT dismiss with any CREDIBILITY the ENORMITY
that is contained in those few lines alone.

The W3C validator is utterly meaningless, and the sooner we quit
sponsoring it as a tool worth listening to, the better, because it is
CAPABLE of delivering RESULTS LIKE THOSE, which are clearly crazy.

Now, the contents of these sites change daily. I am sure you would now
suggest that I should first group, and then update these groups daily,
or even better several times a day, since these sites are all updated
more than once daily. The funny side is that you appear serious when
you attempt to claim this as a valid objection, and that I INDEED
should have been required to do it or you would just ignore what those
NUMBERS and NAMES spell so clearly :)

So, we have also results like:

tucows: 56 errors
Intel: 42 errors
Epson: 12 errors
Canon: 8 errors
American Association for the Advancement of Science: 127 errors
Linux: 22 errors
Motorola: 96 errors
Web Pro News: 299 errors
Visa: 23 errors

Your line of reasoning is that since some sites have more errors and
some fewer, this should to some degree invalidate the list. But the
list is valid insofar as those sites are invalid. And how many errors
are enough to be invalid, since for the W3C ONE is enough?

And if fewer errors equal something more compliant, the outcome that
says Canon is more compliant than Yahoo and Excite is a point that
does NOT support your position: it supports the CONTRARY. Because when
we speak of the internet, we speak far more of the likes of Excite and
Yahoo than of Canon and Epson. So what does it matter if some sites
have fewer errors, when the reasoning ends up short-circuiting ANYWAY?

And in any case, we can't be more presidential than the president. If
the W3C says that one error is enough to disqualify the validity of a
page, you cannot correct the sentence the W3C pronounces by
pronouncing a different sentence, while declaring in the same line
that this latter, which denies the former, defends the former.

You are on grounds you can't defend, Rob.

And you attempt to cope with this by chicanery. Now, I was born in the
country of Machiavelli, so I do not consider chicanery an irrelevant
or despicable art. But we have an objective problem, Rob: a W3C that
disqualifies as invalid nearly the whole of the internet. And if you
think that desperately attempting to deny this fact, scrambling for
marginal advantages, may work for the sake or in the interest of the
W3C, keep in mind that it won't.

There is NO point in upholding a position that clearly cannot be
defended before such a list, the numbers it yields, and the names it
involves. The best defence of the W3C that I sponsor, and which you
should sponsor too, is that the W3C should CHANGE its own approach to
what validation is, and to what should be considered valid and
invalid.

The W3C is not God. But you deal with it as if it were.

The W3C is wrong. That list proves it. We cannot declare the whole of
the world invalid by rules we ourselves made and that clearly nearly
everybody violates, and everybody that MATTERS, and at the same time
say that we are right. It's a tautology that has to be corrected.
W3C validation, as it is now, makes NO sense.

And all the chicanery in the world can't change this fact, which that
list proves beyond any doubt, beyond any reasonable doubt, before
whatever court of unbiased men and women.

Ciao, and by the way it has been a pleasure to talk with you - but of
course if the defence of the indefensible must resort to these
arguments, there is no longer any point in debating. If you have
personal reasons, which I do not contest, for which you don't want to
or can't afford to see that W3C validation, as it CURRENTLY stands, is
a CLEAR mess deprived of any meaning, then there is really nothing I
can say, nor any possible evidence that can be brought forth that
shall be able to change your mind in the LEAST, no matter how sound
the reasoning, no matter how evident the evidence. Because even in the
most perfect work you can find a flaw, so all the more in the obscure
work of an obscure man like myself, who just tries to make blatant
what is patent already: W3C validation, as it is NOW, makes JUST NO
SENSE whatever.


Alberto
http://www.unitedscripters.com/spellbinder/internetexplorer.html
 

RobG

Alberto said:
But Rob, this is not something between me and you. I understand I
started the thread taking as an occasion your reply, but the scope we
are dealing with, and which merely springs from that occasion, spans
beyond it by such a degree and in so obvious a manner, that in no way
you should misread it as something between you and me.

I also understand that you may think terms like "uncorrectly" are in a
non formally correct spelling, but you also have to realize that here
on these groups, which are open to an international audience and that
so many persons from all around the globe read, it is just impossible
that we are all native english speaker. I am not in fact, so you should
be indulgent with my possible mistakes, and you shouldn't use my
possible grammatical mistakes as an argument to gain a point in a cause
that cannot be won anyway.

That was not my intent, I thought you were attributing its use to me -
clearly a misunderstanding.

I am not here to invent a thesis. Like yourself, I too am an adult who
can produce an intellectual effort in order to stress the importance of
something that is corroborated by facts, not uncorroborated. As said, I
do not claim to be beyond criticism, but I find it intellectually
impossible to dismiss a list like the one I provide on the grounds you
attempt to propose.

What I criticise is that conclusions are drawn from the presentation
of the list that are not supported by a more rigorous analysis.

Mozilla validated, true: yet, as they can surely confirm to you if they
are honest, when I made the list on the nefarious day of Sept 11 2005,
it didn't. Now, in so long a list, if all we are left with is the
desperate search for a link incorrectly listed as invalid so as to
uphold it as an alleged disqualification of the whole list, we have
been left with very little, and we are clearly scrambling for a
desperate line of defence at the bottom of the barrel. If we are left
with that, we are actually proving the strength of the list, NOT its
weakness.

We have a list that is not only long, but that could easily have been
longer.
Most importantly, the names listed in such list are stunning ones: they
are not a part of the internet, they are the internet.

They are web sites, not 'the internet'.

Can you reasonably criticise Apple for having 3 trivial errors? Or
even MSDN for their 70 or so? And from that draw the conclusion that
standards compliance doesn't matter?

Does the fact that perhaps 95% (complete guess) of markup *is*
compliant count for nothing?

Now, relieved of any burden of proof as you deem yourself while,

The burden of proof is yours, not mine. You are proposing a theory, I
am saying your 'proof' is insufficient.

[...]
The W3C is not God. But you deal with it as if it were.

Not at all, not ever. I have only ever used 'W3C' in the context of
their standards, I don't think I have ever commented on their
competence or omnipotence.

The W3C is wrong. That list proves it.

It proves nothing of the sort! Are the police wrong because crime
still occurs? Should all laws be thrown out because everyday people
break laws every single day?

Have you never, *ever* broken any law? No matter how trivial?

That is the standard you would hold W3C standards to. Incidentally,
there are a number of fundamental internet and web standards that are
not controlled by the W3C, ECMAScript being an example.

We cannot declare invalid the
whole of the world according to rules we ourselves made and that
clearly nearly everybody violates, everybody that MATTERS included, and
at the same time say that we are right. It's a tautology that has to be
corrected. The W3C validation, as it is now, makes NO sense.

And all the chicanery of the world can't change this fact, which that
list proves beyond any doubt, beyond any reasonable doubt, before
whatever court of unbiased men and women.

Your basic premise is that complaining about invalid sites is
pointless because most sites are invalid. I can accept that as a
point of view.

In order to argue the point, I would:

1. Define what 'standards compliant' means. Are we talking just HTML?
Or are CSS, DOM, ECMAScript included? Most readers of this forum
would include at least those when discussing the standards compliance
of hosted pages. Let's restrict ourselves to HTML as defined by the W3C.


2. Establish a framework for determining the relevance of
non-compliances. For example, the use of a single deprecated tag in a
page of several hundred tags should be treated as trivial, whereas
forgetting mandatory closing tags or incorrect nesting of block
elements inside inline elements is much more serious.

Then you respond to individual complaints of 'Oh, Google isn't
compliant because of ...' by indicating whether it really matters or
not.
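Such a triage step can be sketched in code. This is only an illustration of the idea: the keyword lists below are made up for the example, not any official W3C classification of validator messages.

```javascript
// Hypothetical severity triage for validator messages: deprecated
// presentational tags count as trivial; structural faults as serious.
// Both keyword lists are examples, not an official classification.
var TRIVIAL = ["font", "center", "blink"]; // deprecated tags
var SERIOUS = ["unclosed", "mis-nested"];  // structural faults

function classify(message) {
  var i;
  // Naive substring match against the validator's message text
  for (i = 0; i < TRIVIAL.length; i++) {
    if (message.indexOf(TRIVIAL[i]) !== -1) { return "trivial"; }
  }
  for (i = 0; i < SERIOUS.length; i++) {
    if (message.indexOf(SERIOUS[i]) !== -1) { return "serious"; }
  }
  return "unclassified";
}

console.log(classify("element font is deprecated")); // "trivial"
console.log(classify("unclosed element div"));       // "serious"
```

A real triage would of course parse the validator's structured output rather than match keywords, but the principle is the same: weigh the error class, not just the raw count.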

Of course purists will never be satisfied with anything less than 100%
compliance (that goes beyond simple DTD validation), but the vast
majority of surfers will be happy with 'fit for purpose' compliance.


3. Determine a good compliance methodology. The W3C validator only
checks against a DTD and does not correctly report some markup (e.g.
HTML inside script document.write statements). This results in
spurious reporting of errors - validator results *must* be analysed
further.
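A classic case of such spurious reporting is markup written from script. An SGML-based parser treats a literal "</" inside a script element as the end of the element, so a page that writes closing tags via document.write can trip the validator even though the script is fine. The usual workaround is to escape the slash in the source, which changes nothing at runtime, as this small sketch shows:

```javascript
// The validator reads the raw page source, so a literal "</" inside a
// <script> element can be taken as the end of the element (ETAGO).
// Escaping the slash hides that character sequence from the markup
// parser while leaving the string JavaScript produces unchanged.
var risky = "<p>generated</p>";  // source text contains a literal "</"
var safe  = "<p>generated<\/p>"; // parser-safe in the page source

// At runtime the two strings are identical; the difference exists only
// in the page source as seen by the markup parser:
console.log(safe === risky); // true
```

So a validator complaint about such a line is an artifact of DTD-based checking, not a real defect in the script.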


4. Having regard to the above, determine the consequences of
compliance/non-compliance for various classes of common errors.

The usefulness of standards compliance can only be evaluated against
its alternatives - the consequences of not being compliant. In the
extreme, without standards the web would not exist, so we are only
discussing what level of compliance is reasonable.

ciao, and btw it has been a pleasure to talk with you

Likewise, cheers.


[...]
 
V

VK

Nearly The Whole Of The Internet Is NOT W3C Valid

First of all I'd like to state that there must be some commonly
accepted standards, otherwise any development becomes an unreliable and
very expensive task.

Secondly, W3C is a standardization body that was *originally* supposed
to accept or decline open technology descriptions from development
companies. If a technology was accepted, W3C was supposed to describe
it in such a way that anyone else could reproduce it in a fully
compatible way.

Thus W3C is not an eastern-style dictatorship and we are not their
slaves. It is supposed to be a feedback-based process. But after the
Browser Wars, W3C was put for years in a rather abnormal situation:
On one side, the dominant browser producer did not give any respect to
W3C, so there was no use appealing to it.
On the other side, Mozilla followed each and every order from W3C
because it was the only way to keep the W3C endorsement that was
vitally important at the time.

Someone said that absolute power corrupts absolutely...

As a sample of how things were going and how they *may* eventually
change consider this code:
....
if (NN6) {
  // Remove all existing children of the target element
  var range = document.createRange();
  var l = document.getElementById('aLayer');
  while (l.hasChildNodes()) {
    l.removeChild(l.firstChild);
  }
  // Parse the HTML string into a fragment and append it
  range.setStartAfter(l);
  var docFrag = range.createContextualFragment(html);
  l.appendChild(docFrag);
}
....

If you are wondering what these LSD-inspired revelations about the DOM
mean: this is how you were supposed to emulate innerHTML in the first
releases of Netscape 6.
That would not be so bad: anyone can do a dumb thing on a first
attempt. The real issue was that W3C had positioned the above as an
*advantage* of proper DOM model usage over the amateur and incorrect
Microsoft way, with their terrible innerHTML method.

The reaction of developers was furious. That was the first public
revolt against W3C standards (in which I took my humble part too). W3C
and Netscape were so terrorized by hate letters that already in the 3rd
patch to NN6 (May 2000) they implemented Microsoft's innerHTML, which
everyone's enjoying now. As a small "revenge compensation" they refused
to accept innerText, though people have managed to live without it.
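For contrast with the Range gymnastics above, here is what the same operation looks like once innerHTML is available. The stub `document` object is only there so the sketch runs outside a browser; in a real page the browser supplies `document` and the element.

```javascript
// Minimal stand-in for the browser DOM so the sketch runs anywhere;
// in a real page, `document` and the element come from the browser.
var element = { innerHTML: "" };
var document = { getElementById: function (id) { return element; } };

// With innerHTML, the whole emptying/Range/createContextualFragment
// dance collapses into a single assignment:
var html = "<b>new content</b>";
var layer = document.getElementById("aLayer");
layer.innerHTML = html; // replaces all children in one step

console.log(layer.innerHTML); // "<b>new content</b>"
```

One assignment instead of a Range, a loop, and a fragment: it is easy to see why developers preferred it.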

Coming back to our days: why is, say, Google's front page not W3C
compliant? Look at the code and you'll see that all scripting/layout is
adapted to be as compact as possible. If you're getting millions upon
millions of requests every day, then each byte counts. And adding a
monstrous doctype declaration is not acceptable.

W3C already lost this battle: if a standard is not implemented by major
producers within a few years, it will never be implemented. How could
W3C save the situation and its face? By, say, changing this apogee of
bureaucratic thinking:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" ...url>
to something like:
<!001>
where the number would refer to an entry in a W3C DTD table. The
computer doesn't mind; it's even easier for it to parse a short number
than a long string. And then there would be no excuse not to include
this little bit of code in your site, no matter how loaded it is.
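The byte arithmetic behind that argument can be made concrete. The sketch below uses the real HTML 4.01 Transitional doctype; the numeric shorthand and the daily request figure are illustrative assumptions, not real data.

```javascript
// The full HTML 4.01 Transitional doctype versus the hypothetical
// numeric shorthand proposed above.
var full = '<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" ' +
           '"http://www.w3.org/TR/html4/loose.dtd">';
var shorthand = "<!001>";

console.log(full.length);      // 102
console.log(shorthand.length); // 6

// At a (made-up) hundred million requests a day, the saving adds up:
var requestsPerDay = 100e6;
var savedBytes = (full.length - shorthand.length) * requestsPerDay;
console.log(savedBytes / 1e9 + " GB/day"); // "9.6 GB/day"
```

Ninety-odd bytes per page is nothing for a personal site, but at Google-scale traffic it becomes gigabytes of bandwidth a day, which is the point being argued.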

But for this, W3C needs to be able to admit that *sometimes* they can
go wrong and that *sometimes* developers and browser producers know
better.
That ability was totally lost over the past years, though, and the
restoration process may be long and painful.
 
R

Robert

Alberto said:
yahoo 281 errors
AOL 277 errors
altavista 38 errors
excite 235
Netscape 101
Lycos 170
Google 51
NBC 317
Ebay 222
Monster com 256

The W3C validator is utterly meaningless, and the sooner we quit
sponsoring it as a tool worth listening to, the better, because it is
CAPABLE of producing RESULTS LIKE THOSE, which are clearly crazy.

I am not sure what point you are trying to make. If you are saying that
W3C (HTML) validation is meaningless because so many websites are
invalid, then I don't understand how you came to such a conclusion.

I can explain to you the benefits and needs for it, but there are so
many reasons that I don't know where to begin.
 
M

Michael Winter

On 16/11/2005 05:50, Alberto wrote:

Not that I hold /any/ sway as to who posts what in this group (nor would
I want to), but it is a shame that you decided to ignore a very
reasonable request to not discuss a provocative subject that is
off-topic in this group. Particularly when you bring no significant
position or argument to something that has been debated numerous times
in more appropriate groups.

Just to make this point clear now, this will be my /only/ post to this
thread, unless it moves to matters of a more suitable nature. I have no
intention of getting committed to potential flame wars like this. It's
happened far too often in the past.

[snip]

You still haven't corrected the technical error I pointed out in the
previous thread. I'd also like to emphasise that I doubt it is the only
mistake on your part. However, I'm not going to wade through such a
long-winded article to discover others.

[snip]
I hope you credit me with the impossibility, with hundreds of sites
linked there and declared invalid, to add for each of them also the
excruciating length of all the W3C validator errors that they report,
[...]

If your article aims to make a point, and I assume it does, then you
should be willing to comment on the extent and impact of any errors.

Validation should not be a goal in itself. If it is possible to create a
document that's valid and, even better, compliant with all relevant
standards, then effort should be made to achieve that end. Incidentally,
this should be the case for the vast majority. However, there are
reasons to intentionally write invalid markup: laziness or incompetence
do not qualify.

Without inspecting each site, I couldn't comment on whether the
reported errors are intentional and with good reason, or just the
result of poor practice (though I could guess). Anyway, that would be
your responsibility, not mine.

[snip]

Mike


Please learn how to post properly. Interleave comments with quoted
material, and trim the irrelevant (preferably indicating that action).
Read the group FAQ, particularly 2.3 and its links.

I hope Randy doesn't mind me borrowing this:

If you want to post a followup via groups.google.com, don't use the
"Reply" link at the bottom of the article. Click on "show options" at
the top of the article, then click on the "Reply" at the bottom of the
article headers.

[Follow-ups set to poster]
 
T

Thomas 'PointedEars' Lahn

VK wrote:

[Quotation corrected]
First of all I'd like to state that there must be some commonly
accepted standards, otherwise any development becomes an unreliable and
very expensive task.
Indeed.

Secondly, W3C is a standardization body that was *originally* supposed
to accept or decline open technology descriptions from development
companies.

It was and is not.
If a technology was accepted, W3C was supposed to describe it in such a
way that anyone else could reproduce it in a fully compatible way.

Thus W3C is not an eastern-style dictatorship and we are not their
slaves. It is supposed to be a feedback-based process. But after the
Browser Wars, W3C was put for years in a rather abnormal situation:
On one side, the dominant browser producer did not give any respect
to W3C, so there was no use appealing to it.

True. Let's name it: Microsoft Corp. providing Internet Explorer.
On the other side, Mozilla followed each and every order from W3C
because it was the only way to keep the then vitally important W3C
endorsement. [...]

Utter nonsense. There are and have been no orders from W3C (which is not
the sole body you present it to be[1]) and there was/is no support from
W3C to Netscape/AOLTW or the late Mozilla Organization. The goal of
Mozilla/5.0 was and is to create a user agent that tries to follow Web
standards, in order to set an example of how an Open Source project and
Web standards can increase interoperability, with all its other
benefits.

<http://www.mozilla.org/about/>

Let's leave the rest of your misconceptions to /dev/null, shall we?


PointedEars
___________
[1] <http://www.w3.org/Consortium/Member/List>
 
