Javascript on the client as an alternative to Perl/PHP/Python on the server


Dan Rumney

Hi all,

I've been writing Javascript for quite a while now and have, of late,
been writing quite a lot of AJAX and AJAX-related code.

In the main, my dynamically generated pages are created using Perl on
the backend, with Javascript providing limited frontend functionality.
As an example, an expanding tree would be fully populated on the
server-side and then presented to the browser, with Javascript and CSS
being used to vary the visibility of elements of the tree as required.
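
As a minimal sketch of what I mean (the markup and function names here
are made up): the branch arrives fully rendered, and the script merely
toggles its display:

<ul id="tree">
  <li><span onclick="toggleBranch(this.parentNode)">Parent node</span>
    <ul style="display: none">
      <li>Child node</li>
    </ul>
  </li>
</ul>

<script type="text/javascript">
// Toggle the visibility of the first nested list under a tree node.
function toggleBranch(node) {
  var branch = node.getElementsByTagName("ul")[0];
  if (branch) {
    branch.style.display = (branch.style.display == "none") ? "" : "none";
  }
}
</script>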

The critical point is that the page is "pre-built" on the server.

I've been thinking about an alternative approach, whereby the page is
built on the fly with various AJAX calls to the server to pull in the
necessary components. In the extreme, I could visualize doing away
with Perl generated pages entirely. All pages are HTML, with AJAX
calls to the server. The responding scripts would return JSON or XML
data which would be interpreted on the client side as required.
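
A skeletal sketch of what I have in mind (the XHR wrapper, URL, element
id and field names are all made up for illustration):

<script type="text/javascript">
// Cross-browser XMLHttpRequest; the ActiveX fallback covers older IE.
function createXHR() {
  return window.XMLHttpRequest ?
      new XMLHttpRequest() :
      new ActiveXObject("Microsoft.XMLHTTP");
}

// Fetch JSON from the server and build part of the page from it.
function loadPage() {
  var xhr = createXHR();
  xhr.open("GET", "/cgi-bin/data.pl?view=summary", true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState == 4 && xhr.status == 200) {
      // eval() of the response was the common idiom at the time;
      // a real JSON parser is safer.
      var data = eval("(" + xhr.responseText + ")");
      document.getElementById("content").innerHTML =
          "<h1>" + data.title + "</h1>";
    }
  };
  xhr.send(null);
}
</script>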

An advantage to this would be that it would be a lot easier to
generate the pages using simple HTML editors. It would be a lot
simpler to ensure validity of the HTML (as the final product would
always be available to me).

I understand that such an approach would mean that non-JS enabled
browsers would not be able to access the pages I create, but I'm not
concerned about that (my audience is internal to my company, so I can
stipulate browser requirements).

I'm interested in people's comments on this approach. Does it place an
extra burden on the server? Are there any hidden advantages or
disadvantages I may not be aware of? Does anyone know of any white papers
on this approach?

Many thanks,

Dan
 

Peter Michaux

In the main, my dynamically generated pages are created using Perl on
the backend, with Javascript providing limited frontend functionality.
As an example, an expanding tree would be fully populated on the
server-side and then presented to the browser, with Javascript and CSS
being used to vary the visibility of elements of the tree as required.

The critical point is that the page is "pre-built" on the server.

This is a good approach, indeed the only viable approach, if you want
your pages on the general-purpose web where old browsers and/or
disabled users are visiting the site. A fully functional HTML page is the only
way to start in this environment where the JavaScript merely enhances
the user experience. JavaScript can feel a bit like fluff or icing on
the cake in this situation. It can also add the necessary wow factor
for the marketing department. This is the most expensive form of
development for a front end as it needs to work under many sets of
circumstances.
I've been thinking about an alternative approach, whereby the page is
built on the fly with various AJAX calls to the server to pull in the
necessary components. In the extreme, I could visualize doing away
with Perl generated pages entirely. All pages are HTML, with AJAX
calls to the server. The responding scripts would return JSON or XML
data which would be interpreted on the client side as required.

Over the past while one of my work projects has amounted to an HTML
page that essentially looks like this:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<title>one-page client-side app</title>
<script src="library.js" type="text/javascript"></script>
<script src="app.js" type="text/javascript"></script>
</head>
<body>
</body>
</html>

The app.js file loads all the necessary data using Ajax and JSON. The
body is dynamically built based on this data.
An advantage to this would be that it would be a lot easier to
generate the pages using simple HTML editors.

I don't understand why it would be easier, or what "simple HTML
editors" are and how they relate here. Generating pages server-side is
not a particular burden, and there are actually more resources to read
and more frameworks available for doing page generation on the server side.
It would be a lot
simpler to ensure validity of the HTML (as the final product would
always be available to me).

I don't understand this either. It should be easy to validate the HTML
either way.
I understand that such an approach would mean that non-JS enabled
browsers would not be able to access the pages I create, but I'm not
concerned about that (my audience is internal to my company, so I can
stipulate browser requirements).

Make sure the company will never hire a disabled user that requires a
browser that does not support JavaScript.
I'm interested in people's comments on this approach.

It works.
Does it place an
extra burden on the server?

Possibly less. It depends on how much code the server has to send to
the client so the client can generate the page. It also depends on how
many times the page needs to be redrawn/refreshed without changing the
data. In my case the page is redrawn many times based on about 50k of
relatively constant data. Doing it all client-side saves the server
from going to the cache or DB many times, and saves downloading this
50k many times as well.
Are there any hidden advantages or
disadvantages I may not be aware of?

I think the main ones are accessibility and that you will likely deal
with more browser bugs because the client is doing more. You will be
managing quite a bit of data on the client-side. Some sort of MVC
architecture in the JavaScript may help.
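
For instance, a bare-bones observable model (all names here are
invented) would let several views redraw themselves whenever the data
changes:

// Bare-bones observable model; views subscribe and redraw on changes.
function Model(data) {
  this.data = data;
  this.listeners = [];
}
Model.prototype.subscribe = function (fn) {
  this.listeners.push(fn);
};
Model.prototype.set = function (key, value) {
  this.data[key] = value;
  for (var i = 0; i < this.listeners.length; i++) {
    this.listeners[i](this.data);
  }
};

// Usage: a view that rewrites part of the page when the model changes.
// The "count" element is assumed to exist in the document.
var model = new Model({count: 0});
model.subscribe(function (data) {
  document.getElementById("count").innerHTML = String(data.count);
});
model.set("count", 1);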
Does anyone know of any white papers
on this approach?

What exactly would a "white paper" be for a topic like this? People
toss this phrase around all the time. I use Google, read blogs and
comp.lang.javascript, of course.

Peter
 

david.karr

I've been thinking about an alternative approach, whereby the page is
built on the fly with various AJAX calls to the server to pull in the
necessary components. In the extreme, I could visualize doing away
with Perl generated pages entirely. All pages are HTML, with AJAX
calls to the server. The responding scripts would return JSON or XML
data which would be interpreted on the client side as required.

Technically, I'm still a lurker in this domain, as I've yet to get
into a real Javascript project, but I think this approach has a lot of
potential (and I've been thinking a lot about this recently).

However, I would still approach this pragmatically. Respecting the
fact that you're probably more Perl-oriented, it's worthwhile to still
utilize some well-built frameworks as components of this, but in a
loosely coupled fashion (for instance, I would never use JSP custom
tags that encapsulate old versions of Dojo or YUI components). Using
YUI as the client framework and Struts2 to implement services for the
client would be a good combination. Adding DWR to this mix could be
useful also.

However, don't ignore the possibility of still doing some server-side
code generation. There may be situations where server-side generation
of the initial snapshot of data would be useful.

As other posters have pointed out, you have to pay attention to your
requirements. If accessibility is an issue, you'll have to think
carefully about that.
 

Dan Rumney

-- cut --

Thanks Peter,

You raise some good points there.

Some of them are less relevant to me, but useful for the wider
readership of this forum.

Any solution that is dependent on Javascript for content generation
and manipulation will cause problems for those with old browsers,
text-only browsers, or those using screen readers. I wholly acknowledge
this fact, but I'm not going to dwell on it further as I feel that this
has probably been touched on by various other posters.

By simple HTML editors, I mean things like Notepad, Crimson Editor,
HTML-Kit and the like. I'm not a big fan of Dreamweaver and other
'visual' editors, but I'd be the first to admit that I probably need
to get over that bias. I like to be able to understand the link
between what I'm seeing and the HTML that's being created.

I'd also be the first to admit that I'm not abreast of all the
framework choices that are out there.

The way I like to develop dynamic pages is to create a mockup of how
I'd like the final page to look, using purely static HTML (and making
up some arbitrary values for the dynamic bits).
The method I outlined in my original post makes the transition from
static to dynamic a lot simpler.

I think your comment about browser bugs is a *very* useful one.
Server generated HTML will never be exposed to that. So long as it
generates good HTML, it will produce a sensible page. Writing
Javascript that is cross-browser compliant is a major pain (this I
know!)

Not to diminish your comments, but I think I can summarise them as:
1) Javascript page building results in pages that are not widely
accessible
2) Javascript page building is prone to browser bugs

I think it's important to all readers to be aware of 1) and understand
its relevance to their solution (and, most of all, not underestimate
that importance)
I think that 2) is a good point, but can be solved with an appropriate
client-side framework.

Which brings us back to one of your first points, albeit an implicit
point: find the right server-side framework.

Great food for thought; thanks!

For the record: a white paper is broadly understood to be a relatively
short treatise or report aimed to educate people on a certain point or
to present a solution to an industry problem. The phrase may be tossed
around widely, but I think I was using it in its appropriate context
here.
 

Dan Rumney

-- cut --

Some more good points.

I need to learn more about YUI, Struts2 and DWR, clearly.

However, I was particularly struck by your suggestion of using server-
side code for the initial snapshot and, if I may infer your meaning,
using Javascript to update the snapshot as time goes on.

I agree in principle; however, I'd like to present a counterpoint.

The framework above requires:

(SS: server-side, CS: client-side)

1) SS to generate initial page
2) CS to present initial page
3) CS to handle ongoing requests
4) SS to generate responses to requests

Using Javascript-built pages, you could avoid 1) altogether.

Question is: is this such a big saving after all? I think that the
answer to that may be purely application specific. However, you've
also given me some grist to add to my mental mill.

Thanks
 

Thomas 'PointedEars' Lahn

Peter said:
Over the past while one of my work projects has amounted to an HTML page
that essentially looks like this:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<title>one-page client-side app</title>
<script src="library.js" type="text/javascript"></script>
<script src="app.js" type="text/javascript"></script>
</head>
<body>
</body>
</html>

The app.js file loads all the necessary data using Ajax and JSON. The
body is dynamically built based on this data.

Very accessible. NOT.
I don't understand this either. It should be easy to validate the HTML
either way.

As the W3C Validator documentation already explains, validation alone
is not a mark of service quality. An empty document, a document that
has no content without scripting, is FUBAR. Unless, of course, it
exists merely to demonstrate a scripting feature or how FUBAR such a
document would be.
Make sure the company will never hire a disabled user that requires a
browser that does not support JavaScript.

But that is not it at all. Accessibility does accommodate users with
disabilities, but by no means only them.
It works.

It does not. The document is empty: for a user with disabilities, a
search engine, a user behind a filtering proxy, a user with a not so
sophisticated mobile device, and so on. You are blinding yourself to
the possibilities of access to a Web application if you call this
nonsense working.


PointedEars
 

VK

It does not. The document is empty: for a user with disabilities, a
search engine, a user behind a filtering proxy, a user with a not so
sophisticated mobile device, and so on. You are blinding yourself to
the possibilities of access to a Web application if you call this
nonsense working.

For users with disabilities, I would strongly suggest not following
the W3C's approach, where a bunch of healthy people (possible mental
disabilities being disregarded) get together to decide what is most
needed for people with disabilities.

We could make a cross-group discussion of the sub-subject in c.l.j.,
comp.human-factors and alt.comp.blind-users.
I would exclude ciwah because, from my previous experience, similar
discussions there attract side spoilers - that is, people without
disabilities pretending to be such in order to enforce their opinions
on the subject. alt.comp.blind-users is more reliable because the
regulars can easily detect a "black sheep" in the thread.

From the current topics I see that Javascript is the least of blind
users' concerns:
http://groups.google.com/group/alt.comp.blind-users/browse_frm/thread/12b21f70d38c8845
http://groups.google.com/group/alt.comp.blind-users/browse_frm/thread/9ec3f68a4ef803dc
http://groups.google.com/group/alt.comp.blind-users/msg/d56704254dc18bcb

Yet let us ask them rather than guess. The OP might make a demo page
using the new approach he is thinking of, so it could be visited by
people with disabilities for their feedback. It is not the entire
problem - as was already pointed out - but at least the sub-subject of
"Javascript and blind users" could be investigated.
 

Thomas 'PointedEars' Lahn

VK said:
For users with disabilities, I would strongly suggest not following
the W3C's approach, where a bunch of healthy people (possible mental
disabilities being disregarded) get together to decide what is most
needed for people with disabilities.

Please spare us your delusions about what the W3C is or is not. After
having read your W3C-related blog entry at <http://comvkmisc.blogspot.com/>,
nobody in their right mind would consider your statements to be relevant
anymore.
We could make a cross-group discussion of the sub-subject in c.l.j.,
comp.human-factors and alt.comp.blind-users.

Or we could simply call you an incompetent delusional troll.
I would exclude ciwah because, from my previous experience, similar
discussions there attract side spoilers - that is, people without
disabilities pretending to be such in order to enforce their opinions
on the subject. alt.comp.blind-users is more reliable because the
regulars can easily detect a "black sheep" in the thread.

Talking about "black sheep" in the thread is a joke when it comes from you,
and a bad one at that. You are evidently not capable of taking part in a
serious technical discussion; your inability or unwillingness to accept
proven facts as the truth, to break out from your little fantasy world is
too much of a hindrance for that.

That blind users or users with impaired vision might not recognize this as a
problem constitutes no evidence that there is no problem with this.
Yet let us ask them rather than guess.

No thanks, you may indulge in your fantasies and get yourself scored down
and killfiled all by yourself.
The OP might make a demo page using the new approach he is thinking
of, so it could be visited by people with disabilities for their
feedback. It is not the entire problem - as was already pointed out -
but at least the sub-subject of "Javascript and blind users" could be
investigated.

There is nothing to investigate there, as nothing was said about blind
users in particular. That said, the fact that text browsers usually do
not support client-side ECMAScript-compliant scripting or the APIs
under discussion here should be indication enough that an empty
document filled through these techniques is a problem for users with
impaired vision.

I strongly suggest again you stay silent about things you don't have the
slightest clue of.


PointedEars
 

VK

Please spare us your delusions about what the W3C is or is not. After
having read your W3C-related blog entry at <http://comvkmisc.blogspot.com/>,
nobody in their right mind would consider your statements to be relevant
anymore.

This blog post is my comment on
http://www.w3.org/TR/2007/WD-html-design-principles-20071126
which indeed gave me, at the time, some hope for the W3C, so I planned
to comment on further developments. Alas, the hardcoded ones won again,
with the 4.01 updates never accepted and HTML 5 pushed to "somewhere in
a few years or so, or better never". It is very unfortunate, because
with the W3C inevitably fading out of the list of Web authorities we
are coming back to the old situation of unmediated browser-producer
wars, with IE still hugely dominating the market. Maybe it was a
strategic mistake for the WHATWG to join the W3C and give up their
HTML 5 working base as some "initial membership fee". By keeping their
original position as a group of reasonably thinking technical
specialists in opposition to a group of fanatic pedants, they could
have tried to transfer the standardization authority from the W3C to
the WHATWG. Such a transfer would have been supported by many, IMO. And
now they and their HTML 5 are buried in the usual endless XHTML,
"informational objects" and other useless crap. That is IMO - and what
exact connection does it have with site accessibility or usability?



That blind users or users with impaired vision might not recognize this as a
problem constitutes no evidence that there is no problem with this.

That is just hilarious. So who cares what actual accessibility
problems users with disabilities are experiencing, right? They are
allowed to experience only the problems defined by a set of selected
people; any other problems are not allowed. :)
btw Amazon.com was always known for Javascript-independent design.
Turn the scripting off and try to shop - no problem. Yet it is a hate
target of visually impaired users. Has any one of the "accessibility
fighters" here or at ciwah ever asked them why, exactly? I haven't yet,
but in these groups there are so many people whose hearts bleed for
accessibility, at least based on their posts. Did they ever investigate
the matter, so as not to reproduce it in their own solutions and
advice?
There is nothing to investigate there, as nothing was said about blind
users in particular. That said, the fact that text browsers usually do
not support client-side ECMAScript-compliant scripting or the APIs
under discussion here should be indication enough that an empty
document filled through these techniques is a problem for users with
impaired vision.

For a start, one should investigate whether such users consider
Javascript an accessibility helper or an accessibility spoiler. The
links I gave suggest the first, but again: let's talk about it with
them. At least the idea that every one of them is on a Lynx-like agent
is an unsustainable urban legend, IMHO.
 

Dan Rumney

VK, PointedEars,

Please don't hijack this thread to bicker about accessibility.

It's abundantly clear that anyone using a UA that does not have
Javascript is not going to be able to access pages generated using the
model that I outlined in the original post.

I think a more fruitful discussion would focus on other, less obvious
aspects, which is why I'm seeking the thoughts of others.
 

Thomas 'PointedEars' Lahn


Dan said:
VK, PointedEars,

Please don't hijack this thread to bicker about accessibility.

You miss the point.
It's abundantly clear that anyone using a UA that does not have
Javascript is not going to be able to access pages generated using the
model that I outlined in the original post.

Even if that were the case, it would still not be sufficient. You will
also have to make sure that the script-enabled user agent supports the
exact method by which you fill the document you redirect to, before you
redirect to it. This is not always possible. Where it is, it introduces
double maintenance that you could have spared yourself if you had
created an accessible document, only enhanced with client-side
scripting, in the first place.
I think a more fruitful discussion would focus on other, less obvious
aspects, which is why I'm seeking the thoughts of others.

Usenet is not a right.


PointedEars
 

Peter Michaux

Any solution that is dependent on Javascript for content generation
and manipulation will cause problems for those with old browsers,
text-only browsers, or those using screen readers. I wholly acknowledge
this fact, but I'm not going to dwell on it further as I feel that this
has probably been touched on by various other posters.

The danger isn't deciding that it is ok to depend on JavaScript. It is
*mistakenly* deciding it is ok to depend on JavaScript. I'm not saying
it is a bad decision but making a mistake in this area could be costly
later. Make sure the boss officially makes this decision because it is
basically a decision about investment vs accessibility.
By simple HTML editors, I mean things like Notepad, Crimson Editor,
HTML-Kit and the like. I'm not a big fan of Dreamweaver and other
'visual' editors, but I'd be the first to admit that I probably need
to get over that bias. I like to be able to understand the link
between what I'm seeing and the HTML that's being created.

I only use a text editor for any work, client- or server-side. I would
not consider using anything else, which may be what led to my confusion
about your concern. This doesn't really affect the discussion about
software architecture, though. Different people use different tools,
and the vi and emacs folks try to kill each other.

[snip]
I think your comment about browser bugs is a *very* useful one.
Server generated HTML will never be exposed to that. So long as it
generates good HTML, it will produce a sensible page. Writing
Javascript that is cross-browser compliant is a major pain (this I
know!)

In general, innerHTML works well, and for generating complex bits of a
page it is faster and less buggy than using the standardized DOM
methods like document.createElement.
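
For example, building the same list both ways (container ids and data
invented for illustration):

var items = ["one", "two", "three"];

// innerHTML: build one string, assign once. Usually faster and,
// historically, less buggy for complex fragments such as tables.
var html = "<ul>";
for (var i = 0; i < items.length; i++) {
  html += "<li>" + items[i] + "</li>";
}
html += "</ul>";
document.getElementById("viaInnerHTML").innerHTML = html;

// DOM methods: more verbose, one node at a time.
var ul = document.createElement("ul");
for (var j = 0; j < items.length; j++) {
  var li = document.createElement("li");
  li.appendChild(document.createTextNode(items[j]));
  ul.appendChild(li);
}
document.getElementById("viaDom").appendChild(ul);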

Not to diminish your comments, but I think I can summarise them as:
1) Javascript page building results in pages that are not widely
accessible

They are widely accessible, because generally JavaScript is "on" in
browsers. They are just not *as* widely accessible. This distinction
somewhat softens the blow, but for the single user who is left out it
makes no difference.
2) Javascript page building is prone to browser bugs

I think it's important to all readers to be aware of 1) and understand
its relevance to their solution (and, most of all, not underestimate
that importance)
I think that 2) is a good point, but can be solved with an appropriate
client-side framework.

Don't put too much blind faith in the "appropriate client-side
framework". The quality of the available mainstream JavaScript
libraries has been called into question here many times. Various basic
errors, from not understanding how JavaScript works to using browser
sniffing when it is at best unnecessary (which may be always), have
been pointed out about these libraries on comp.lang.javascript. Some of
these libraries are labeled version 1+, which should be an
embarrassment to the authors. They would be better labeled version
0.0.1.
Which brings us back to one of your first points, albeit an implicit
point: find the right server-side framework.

client-side or server-side?

Many of the regulars on comp.lang.javascript do not believe you will
find a client-side library ready and appropriate to your project.
Great food for thought; thanks!

For the record: a white paper is broadly understood to be a relatively
short treatise or report aimed to educate people on a certain point or
to present a solution to an industry problem. The phrase may be tossed
around widely, but I think I was using it in its appropriate context
here.

I don't think you will find white papers on JavaScript. I just read
web pages, books and participate on comp.lang.javascript.

Peter
 

Peter Michaux

Very accessible. NOT.

I didn't claim it to be. This architecture was not completely my
choice. It was the result of business requirements.

This page is behind a login which changes the balance of pros and cons
in the decision making.
As the W3C Validator documentation already explains, validation alone
is not a mark of service quality. An empty document, a document that
has no content without scripting, is FUBAR.

I know that acronym and that is simply your opinion about a subjective
topic.

When one decides to publish information on the web only, one is
requiring a reader to have a computer, an internet connection and a web
browser (or similar program to get content off the web). That is
setting the bar quite high and expensive. For an interested person
without a computer, the content is completely inaccessible.

This establishes that the interested reader requires a certain set of
technology to access the information. In general web content is
published as HTML. Raw HTML is completely unintelligible to the
majority of readers without training in HTML. That means that, for
practical purposes, most readers require a computer, an internet
connection and a web browser like Firefox or Internet Explorer. The
point still being that a very specific set of technology is required
to access the information.

Adding CSS, JavaScript and cookie support to this list of requirements
follows the same path of logic. Whether or not JavaScript is required
to access content, there is still some describable set of requirements
to access the content.

To say that requiring JavaScript for a page is a mistake is a
subjective remark. If a publisher is willing to discard all potentially
interested readers who do not have an internet connection etc., then
the publisher could subjectively decide that JavaScript is a
requirement. The latter decision, about requiring JavaScript, will
exclude fewer readers than the former decision about requiring an
internet connection.

I cannot see how a logical person could say that I am objectively
incorrect.

It has become quite clear over my time reading comp.lang.javascript
that you believe you know the right way to do things and that anyone
else doing it differently is not just wrong but also an idiot. I think
that is a naive approach to assessing others' subjective decisions
without knowing all their decision-making constraints.

We have also seen over time that you are doing some things objectively
wrong like serving XHTML as HTML.

[snip]
It does not.

It absolutely does work for many users.
The document is empty.

That is not an argument about anything.
For a user with disabilities,

There are other strategies for supporting disabled users other than
just a single page gracefully degrading. I'm sure you can think of at
least five other strategies off the top of your head.
a search
engine,

Not all pages are to be indexed or even accessible to a search engine.
a user behind a filtering proxy,

A business decision may be willing to sacrifice these users.
a user with a not so sophisticated
mobile device

Not all pages need to work on a mobile device. As I've established
above, a web page will require some describable set of technologies to
access it. It could be that for a particular page the reader must have
a modern desktop computer with a web browser that has been released in
the past year with all the bells and whistles turned on.
and so on. You are blinding yourself to the possibilities of
access to a Web application if you call this nonsense working.

I think you are jumping to conclusions about my post and you have done
this with other posts of mine in the past. There is really no need to
do that. You could ask questions instead.

I also think you are missing the fact that I have pointed out here
that the technology necessary to read a web page at all is a far more
prohibitive restriction, in terms of the number of people who can read
a page, than having JavaScript off is. Even writing a page in only one
human language eliminates billions of potential readers.

Peter
 

Dan Rumney

The danger isn't deciding that it is ok to depend on JavaScript. It is
*mistakenly* deciding it is ok to depend on JavaScript. I'm not saying
it is a bad decision but making a mistake in this area could be costly
later. Make sure the boss officially makes this decision because it is
basically a decision about investment vs accessibility.

For the record, the specific tool that I own is an internal one. It is
designed to take the textual representation of a system's
configuration and present it in a graphical manner to allow for more
ready identification of configuration errors. I have no budget and no
time allowance for this.

Intellectually, I would love to address the issue of making this tool
available to screen-reader users, but as there is no current demand
for a solution to the problem and no resources for finding that
solution, I'm going to have to let it slide... investment 1,
accessibility 0.

[snip]
In general, innerHTML works well, and for generating complex bits of a
page it is faster and less buggy than using the standardized DOM
methods like document.createElement.

I've come to that realisation myself. I've taken to generating HTML
via code less and less; I now tend to pull data from the server in XML
format, run it through an XSL transformation to HTML and use innerHTML
to put it into the page... so much so, in fact, that I've written a
Javascript object to do just that.
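
In outline, the object does something like this (XSLTProcessor is the
Mozilla-style API; IE needed transformNode() instead, and the names
here are invented):

// xmlDoc and xslDoc are XML documents already fetched via
// XMLHttpRequest (xhr.responseXML); targetId names the element to fill.
function renderTransform(xmlDoc, xslDoc, targetId) {
  var processor = new XSLTProcessor();
  processor.importStylesheet(xslDoc);
  var fragment = processor.transformToFragment(xmlDoc, document);
  var target = document.getElementById(targetId);
  target.innerHTML = "";        // clear the previous contents
  target.appendChild(fragment); // insert the transformed HTML
}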
They are widely accessible, because generally JavaScript is "on" in
browsers. They are just not *as* widely accessible. This distinction
somewhat softens the blow, but for the single user who is left out it
makes no difference.

Good point

Don't put too much blind faith in the "appropriate client-side
framework". The quality of the available mainstream JavaScript
libraries has been called into question here many times.

[snip]

So it would seem. I currently use one library downloaded from the
internet (zXML) for convenience. At the moment, I'd rather write my own
framework and fix the inevitable bugs than download someone else's and
have to decipher it before fixing the inevitable bugs.
client-side or server-side?

I did mean server-side, here.

One of the advantages that I felt my model had was that the
server-side data processing is decoupled from its presentation. In your
garden-variety Perl script, you have data processing mixed up with data
presentation to one degree or another.
Having static HTML pages and calling in the data via AJAX allows you
to develop your presentation pages in pure HTML-CSS-JS, with no
Perl/PHP/Python involved.

However, I see now that a well chosen server-side framework can
provide this functionality too. That was the point I didn't make very
well :)

However, this does bring me back to the point I made in an earlier
post. All the server frameworks that I know of require you to put
'magic tags' inside the HTML so that the server can process it and
replace those tags with data.
This means that your 'template' HTML files will never validate; you can
only test the output of the framework, which is naturally dynamic.
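
For instance, a Template Toolkit style fragment like this (purely
illustrative) will not pass a validator until the server has expanded
the [% ... %] directives:

<ul>
[% FOREACH user IN users %]
  <li><a href="/user/[% user.id %]">[% user.name %]</a></li>
[% END %]
</ul>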

That's not to say that that's an insoluble problem.
 

VK

VK, PointedEars,

Please don't hijack this thread to bicker about accessibility.

IMHO it is not hijacking but branching onto a sub-topic within the
same main topic. But I have no problem with shifting to a new topic
like "Javascript and accessibility" if more is posted on the sub-topic.
It's abundantly clear that anyone using a UA that does not have
Javascript is not going to be able to access pages generated using the
model that I outlined in the original post.

They will if you properly design NOSCRIPT redirect and/or warning
blocks.
I think a more fruitful discussion would focus on other, less obvious
aspects, which is why I'm seeking the thoughts of others.

1) Search recent posts about Ruby in this group, for one (the problem
with script inserting script inserting ...)

2) Another one is specific to charsets beyond US-ASCII in Javascript
strings, especially in document.write. Maybe it is not your case.

3) Browser screen-update mechanics may introduce a _very_ big delay
before any content becomes visible unless you properly release control
back to the browser via setTimeout. By now, IMO, this is the most
common mistake made in Javascript/XHR-intensive solutions.
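
A minimal sketch of the idea (all names invented): process large data
sets in slices and give the browser a chance to repaint between them:

function processInChunks(items, processOne, chunkSize) {
  var index = 0;
  function doChunk() {
    var end = Math.min(index + chunkSize, items.length);
    while (index < end) {
      processOne(items[index]);
      index++;
    }
    if (index < items.length) {
      // Yield so pending repaints and user events can run, then continue.
      setTimeout(doChunk, 0);
    }
  }
  doChunk();
}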
 

Thomas 'PointedEars' Lahn

Peter said:
I didn't claim it to be.

You claimed it to be working. Generally.
This architecture was not completely my choice. It was the result of
business requirements.

That does not make the result a good one.
This page is behind a login which changes the balance of pros and cons in
the decision making.

That statement is not true in itself. Even an application that requires a
login could be exposed to a not so common environment.
I know that acronym and that is simply your opinion about a subjective
topic.

Diminishing an opinion by stating that the topic it discusses is subjective
is fallacious because all opinions are subjective. What matters is if an
opinion is well-founded or not. I think mine is, as I think I have
presented conclusive non-fallacious arguments to support it in the process.
When one decides to publish information on the web only, one is requiring
a reader to have a computer,

For suitable values of "computer". Computers come in different forms nowadays.
an internet connection

As much as it may surprise you, it is _not_ a necessity. Content can be
stored while the Internet connection is established and accessed later offline.
and a web browser (or similar program to get content off the web).

I can accept that as an axiom of no greater value: You need a Web user agent
to access Web content.
That is setting the bar quite high

High for whom?
and expensive.

Not necessarily.
For an interested person without a computer, the content is completely
inaccessible.

This establishes that the interested reader requires a certain set of
technology to access the information. In general web content is
published as HTML. Raw HTML is completely unintelligible to the
majority of readers without training in HTML.

While that is true, it is entirely irrelevant to this discussion.
That means that, for practical purposes, most readers require a computer,
an internet connection and a web browser

See above.
like Firefox or Internet Explorer.

Naming examples to support an argument in an attempt to make believe that
there are no other examples that could contradict the argument is another
fallacy.
The point still being that a very specific set of technology is required
to access the information.

But that is entirely irrelevant to this discussion.
Adding CSS, JavaScript and cookie support to this list of requirements
follows the same path of logic.

A fallacious logic at that. The mere use of CSS, JavaScript or cookies does
not imply a necessity for support of these features. A document can be and
should be useful without CSS, client-side scripting and cookies.
Whether or not JavaScript is required to access content, there is
still some describable set of requirements to access the content.

You are making conclusions based on false or irrelevant premises.
To say that requiring JavaScript for a page is a mistake is a subjective
remark.

Every remark is subjective.
If a publisher is willing to discard all potentially interested
readers who do not have an internet connection etc., then the publisher
could subjectively decide that JavaScript is a requirement.

Which does not mean in any way that this decision is a reasonable one. We
are not discussing which decisions can be made but whether these decisions
are justified.
The latter decision, about requiring JavaScript, will exclude fewer
readers than the former decision about requiring an internet connection.

Given that the latter requires the former in some way, that is a fallacy.
I cannot see how a logical person could say that I am objectively
incorrect.

That is because you do not see that your argumentation is filled with fallacies.
It has become quite clear over my time reading comp.lang.javascript that
you believe you know the right way to do things

At least I can *justify* my design decisions.
and that anyone else doing it differently is not just wrong but also an
idiot.

It may be *your* impression, but that does not constitute evidence that it
is objectively true. BTW, such ad-hominem attacks, another fallacy, are not
going to increase the credibility of an already fallacious argumentation.
I think that is a naive approach to assessing others' subjective
decisions without knowing all their decision-making constraints.

I do not need to know the constraints to show that a decision is not
well-founded.
We have also seen over time that you are doing some things objectively
wrong like serving XHTML as HTML.

Your fallacies are getting tiresome.
[snip]
It does not.

It absolutely does work for many users.

But not for as many as if that path had not been followed.
That is not an argument about anything.

The value of the information in an empty document is zero and can fulfill no
purpose but to show that there is no information. I would deem this to be a
Bad Thing when the intent is to transport information.
There are other strategies for supporting disabled users other than just
a single page gracefully degrading. I'm sure you can think of at least
five other strategies off the top of your head.

Yes, I can. However, I can also see the drawbacks that follow from them
and do weigh them against the greater advantages that follow from not
implementing them.
Not all pages are to be indexed or even accessible to a search engine.

The truth of this statement depends on how one defines a search engine.
A business decision may be willing to sacrifice these users.

Which does not make this business decision a reasonable or
economically correct one. In fact, it makes it a rather dubious one if
you consider that it is seldom the case that a Web application is
solely accessed from within a local area network.
Not all pages need to work on a mobile device.

I included "not so sophisticated" for a reason.
As I've established above, a web page will require some describable set
of technologies to access it. It could be that for a particular page the
reader must have a modern desktop computer with a web browser that has
been released in the past year with all the bells and whistles turned on.

True. However, since that particular "page" would introduce a barrier
that all the other content does not, one has to reconsider whether it
is reasonable that this is the case, or whether it would instead be
better if this barrier were not introduced.
I think you are jumping to conclusions about my post and you have done
this with other posts of mine in the past. There is really no need to do
that. You could ask questions instead.

I don't need to ask questions about particularities in order to make general
statements or show arguments to be fallacious.
I also think you are missing the fact that I have pointed out here
that the technology necessary to read a web page at all is a far more
prohibitive restriction, in terms of the number of people who can read
a page, than having JavaScript off is. Even writing a page in only one
human language eliminates billions of potential readers.

Apples, oranges.


PointedEars
 

Michael Wojcik

Dan said:
By simple HTML editors, I mean things like Notepad, Crimson Editor,
HTML-Kit and the like. I'm not a big fan of Dreamweaver and other
'visual' editors, but I'd be the first to admit that I probably need
to get over that bias.

Your bias is well-founded. Dreamweaver and the like are nasty black
boxes that produce lousy, often non-compliant HTML that's difficult to
maintain. Use an editor (with syntax highlighting and that sort of
thing, if you find it helpful) and write well-structured, concise,
elegant HTML, then validate it.

The principles of good programming apply to HTML, even though it's
just a declarative markup language. Separate concerns: structure your
HTML into functional areas, and separate out presentation (CSS) and
behavior (Javascript) from content (HTML). Emphasize readability: use
whitespace and comments. Identifiers, like style class names and
element IDs, should be meaningful.

These are not things that most "visual" editors will accommodate well.
At best, you'll be switching between "code" and "visual" views; so why
not simply operate in the former, since you're comfortable with it
already?

Visual editors also encourage WYSIWYG thinking, which leads to
inflexible layouts and poor rendering for users whose environments
don't match the author's. Working directly with the abstractions of
HTML and CSS encourages liquid layouts, because you're not looking at
one renderer's opinion of your page.
 

Dan Rumney

[snip]
They will if you properly design NOSCRIPT redirect and/or warning
blocks.

Good point. I was taking a limited view as to what 'access' was.
Certainly, they won't be able to gain anything fruitful from the
specific page that they've pointed their UA at. However, if I use a
NOSCRIPT element, I can ensure that they are informed as to the state
of the page and, ideally, directed to a page that provides comparable
functionality without the need for Javascript.
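
Something like this at the top of the BODY would do (the fallback URL
is invented for illustration):

<noscript>
  <p>This page builds its content with Javascript, which your browser
  does not appear to support. A script-free version is available at
  <a href="/static/index.html">/static/index.html</a>.</p>
</noscript>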
1) Search recent posts about Ruby in this group, for one (the problem
with script inserting script inserting ...)

Thanks for the suggestion. I'll take a look
2) Another one is specific to charsets beyond US-ASCII in Javascript
strings, especially in document.write. Maybe it is not your case.

Not for me, but perhaps useful to other readers.
3) Browser screen-update mechanics may introduce a _very_ big delay
before any content becomes visible unless you properly release control
back to the browser via setTimeout. By now, IMO, this is the most
common mistake made in Javascript/XHR-intensive solutions.

TBH, I've never seen this. I'm not refuting the possibility; do you
know of any sites which demonstrate this behaviour?
 

Peter Michaux

You claimed it to be working. Generally.


That does not make the result a good one.

It may be the very best result under certain business requirements.

[snip]
Diminishing an opinion by stating that the topic it discusses is subjective
is fallacious because all opinions are subjective. What matters is if an
opinion is well-founded or not. I think mine is, as I think I have
presented conclusive non-fallacious arguments to support it in the process.

Yours may be well founded under certain business requirements. You may
not be considering other situations that result in other decisions.
For suitable values of "computer". Computers come in different forms nowadays.


As much as it may surprise you, it is _not_ a necessity. Content can be
stored while the Internet connection is established and accessed later offline.

So at some point an internet connection is required.

I can accept that as an axiom of no greater value: You need a Web user agent
to access Web content.


High for whom?

High for those without a computer, internet connection, and/or web
browser.
Not necessarily.

It certainly is expensive to own a computer, internet connection and
web browser. For some people going to a 10 peso/hour internet cafe in
Mexico is expensive.

[snip]
Which does not mean in any way that this decision is a reasonable one.

or an unreasonable one.

[snip]
At least I can *justify* my design decisions.

As can I.

[snip]
I do not need to know the constraints to show that a decision is not
well-founded.

Then, in my opinion, your decision making process is broken. You
cannot engineer something unless you know the requirements. To believe
that there is one solution for all situations is naive.

Your fallacies are getting tiresome.
http://groups.google.com/group/comp.lang.javascript/msg/48e28a4ea7ec2903




But not for as many if that path would not have been followed.

True. That may not be a net loss, however, when counting profit.

The value of the information in an empty document is zero and can fulfill no
purpose but to show that there is no information. I would deem this to be a
Bad Thing when the intent is to transport information.

The intent may not be to transport information in a document-like
format even though that was the original intent of the web. The web is
now being used as an application platform as well.
Yes, I can. However, I can also see the drawbacks that follow from them
and do weigh them against the greater advantages that follow from not
implementing them.

Do you agree that, given certain business constraints, perhaps you
would weigh the advantages and disadvantages differently?

[snip]
Which does not make this business decision a reasonable or economically
correct one.

It does not make it an incorrect one either.
In fact, it makes it a rather dubious one

Maybe, or maybe not.
if you consider that
it is seldom the case that an Web application is solely accessed from within
a local area network.

A web application may be on the general internet with a notice it
requires CSS, image, JavaScript, Flash, Quicktime support etc. These
would all be reasonable requirements in some circumstances.
I included "not so sophisticated" for a reason.

They don't need to work on not so sophisticated mobile devices.

True. However, since that particular "page" would introduce a barrier
that all the other content does not, one has to reconsider whether it
is reasonable that this is the case, or whether it would instead be
better if this barrier were not introduced.

You certainly are correct for some situations. For others the "wow
factor" that JavaScript provides may be the only reason a user decides
to use a particular website when others exist without JavaScript or
with JavaScript as progressive enhancements. In this situation using
JavaScript heavily may be exactly the reason a site is profitable.

[snip]

Peter
 
