ajax - dynamic update

steffen haugk

Hi there,

Sorry if this is a common or newbie question. I would like to request
some data from a server using Ajax (a search type of thing). Whilst the
server is searching, I would like to display the first results on the
client side, ideally as they come in. What is the best way to go about
such a search?

I can imagine writing results on the server side to a database, which
I could interrogate using Ajax and add to the results through
JavaScript, then using Ajax again to check on the server whether there
are new results. This would work. Question: is there a better way?

Thanks, Steffen
 
Darko

Hi there,

Sorry if this is a common or newbie question. I would like to request
some data from a server using Ajax (a search type of thing). Whilst the
server is searching, I would like to display the first results on the
client side, ideally as they come in. What is the best way to go about
such a search?

I can imagine writing results on the server side to a database, which
I could interrogate using Ajax and add to the results through
JavaScript, then using Ajax again to check on the server whether there
are new results. This would work. Question: is there a better way?

Thanks, Steffen

I'm not sure about the whole thing. Amongst other things, it depends
on the response format. If it is XML, which I would recommend since
JavaScript has built-in support for it, then you can't parse it before
the whole document is received.

Also, how big is your data? If it is so big that this is worth doing,
are you sure you want to display it all on one page? And when I say
"so big" I mean, say, more than 100k. If it's less than that, it'll
probably come in one piece anyway, and progressive display would only
matter rarely, on slow connections.
 
Peter Michaux

I'm not sure about the whole thing. Amongst other things, it depends
on the response format. If it is XML, which I would recommend since
JavaScript has built-in support for it, then you can't parse it before
the whole document is received.

Also, how big is your data? If it is so big that this is worth doing,
are you sure you want to display it all on one page? And when I say
"so big" I mean, say, more than 100k. If it's less than that, it'll
probably come in one piece anyway, and progressive display would only
matter rarely, on slow connections.

XML is a very bulky format.

JSON is less bulky, and you can find ways to make it even less bulky
than a naive use:

<URL: http://peter.michaux.ca/article/2652>

Peter
 
steffen haugk

I'm not sure about the whole thing. Amongst other things, it depends
on the response format. If it is XML, which I would recommend since
JavaScript has built-in support for it, then you can't parse it before
the whole document is received.
I could start off the search with one call, not even bothering about a
result. Then I could periodically check for results with an
XMLHttpRequest; similar to a news feed, I could request more and more
results until the search has finished. RSS uses XML, but I am not
bothered about the format.
Also, how big is your data? If it is so big that this is worth doing,
are you sure you want to display it all on one page? And when I say
"so big" I mean, say, more than 100k. If it's less than that, it'll
probably come in one piece anyway, and progressive display would only
matter rarely, on slow connections.
It is not the size of the data, but the time the search will take.
Maybe I shouldn't use the term 'search'; think of the results as
solutions. The solutions might take some time to calculate, and as the
solutions come in, I would like to show them on the client side, so
the user doesn't have to wait until everything is finished. In fact,
the user could stop the generation of solutions by sending a request
to the server.

As an example have a look at
<http://www.eurobuch.com/index.php?lang=e>
you can search for books in second hand bookshops all over the world.
As the results are coming in, they are added to the list. There is
also a progress bar, and a line that displays the number of results.
That's what I have in mind. I am wondering if there is a standard way
of doing it.

Thanks, Steffen
 
Darko

I could start off the search with one call, not even bothering about a
result. Then I could periodically check for results with an
XMLHttpRequest; similar to a news feed, I could request more and more
results until the search has finished. RSS uses XML, but I am not
bothered about the format.

It is not the size of the data, but the time the search will take.
Maybe I shouldn't use the term 'search'; think of the results as
solutions. The solutions might take some time to calculate, and as the
solutions come in, I would like to show them on the client side, so
the user doesn't have to wait until everything is finished. In fact,
the user could stop the generation of solutions by sending a request
to the server.

As an example have a look at
<http://www.eurobuch.com/index.php?lang=e>
you can search for books in second hand bookshops all over the world.
As the results are coming in, they are added to the list. There is
also a progress bar, and a line that displays the number of results.
That's what I have in mind. I am wondering if there is a standard way
of doing it.

Thanks, Steffen

Well, if that's the idea, then you will probably need a daemon-like
server-side process that won't stop working after your request ends.
You would then have two kinds of requests:
- Start the search (delivering parameters); the server-side process
starts the search, the request ends, and the process continues to work.
- Give me the results found so far (probably put in a database or a
file in the meantime).

In my opinion, this isn't so bad, but it requires the privilege of
running resident processes, written in whatever language, probably C
or C++, which is rarely offered by hosting providers these days. You
would probably have to use your own computer or rent a virtual
dedicated or dedicated server, which is pretty expensive (e.g. several
hundred euros per month).
 
steffen haugk

Well, if that's the idea, then you will probably need a daemon-like
server-side process that won't stop working after your request ends.
You would then have two kinds of requests:
- Start the search (delivering parameters); the server-side process
starts the search, the request ends, and the process continues to work.
- Give me the results found so far (probably put in a database or a
file in the meantime).

In my opinion, this isn't so bad, but it requires the privilege of
running resident processes, written in whatever language, probably C
or C++, which is rarely offered by hosting providers these days. You
would probably have to use your own computer or rent a virtual
dedicated or dedicated server, which is pretty expensive (e.g. several
hundred euros per month).

Hi Darko, thanks for your reply. I don't think it is that difficult.
Ajax is asynchronous, that's what it is all about. I can make one
request to start the search. The callback for that request will tell
me when the search has finished. In the meantime, results are added to
a database, entries tagged with the session id.

And then I can make intermediate requests to gather the results so far.
The callback of the first request will stop these intermediate requests
being made.
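
Something like this minimal sketch is what I have in mind (the endpoint
names, element id and polling interval are all made up, and I've left out
the ActiveX fallback that older IE needs for creating the request object):

var pollTimer = null;

function startSearch(query) {
    // First request: kick off the long-running search on the server.
    var startReq = new XMLHttpRequest();
    startReq.open("GET", "start_search.php?q=" + encodeURIComponent(query), true);
    startReq.onreadystatechange = function () {
        if (startReq.readyState === 4) {
            // The search has finished: stop polling and fetch once more.
            clearInterval(pollTimer);
            fetchResults();
        }
    };
    startReq.send(null);

    // Intermediate requests: poll for the rows written so far.
    pollTimer = setInterval(fetchResults, 2000);
}

function fetchResults() {
    var req = new XMLHttpRequest();
    req.open("GET", "get_results.php", true);
    req.onreadystatechange = function () {
        if (req.readyState === 4 && req.status === 200) {
            document.getElementById("results").innerHTML = req.responseText;
        }
    };
    req.send(null);
}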

What do you think? Steffen
 
Richard Maher

Hi Steffen,
I can imagine writing results on the server side to a database, which
I could interrogate using Ajax and add to the results through
JavaScript, then using Ajax again to check on the server whether there
are new results. This would work.

I imagine you'd experience some locking/concurrency issues with any number
of users attempting to write (and presumably delete - grow/shrink) rows into
the same section of a "scratch" table. It would probably be best to throw in
some sort of housekeeping daemon as well, to remove the orphaned rows that
hang around after ungraceful session exits.
This would work.

Depends on your definition of "work" I suppose :)
Question: Is there a better way?

Answer: Yes

But you won't like it 'cos it doesn't involve HTTP, let alone Ajax. You need
a connection-oriented, context-rich, middleware protocol that wasn't
designed from the ground up to serve static web-pages back when dodgy/flaky
network connections were de rigueur.

The example I'll show you processes a result-set from a server and populates
an option collection in a <select> list. Every time a row arrives from the
server a Record Count is ticked over for the user to see, the Select List
"grows" until 5 rows have been received, and then its scroll-bar diminishes
as more rows arrive.
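
(For what it's worth, the DOM side of that effect boils down to something
like the following sketch; it is not the demo's actual code, and the
element ids are invented:)

function addRow(text, value) {
    var list = document.getElementById("resultList");   // the <select>
    var counter = document.getElementById("rowCount");  // the visible record count

    var opt = document.createElement("option");
    opt.appendChild(document.createTextNode(text));
    opt.value = value;
    list.appendChild(opt);                               // list grows as each row arrives

    counter.innerHTML = String(list.options.length);     // tick the count over
    if (list.options.length <= 5) {
        list.size = list.options.length;                 // grow to 5 visible rows, then scroll
    }
}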
It is not the size of the data, but the time the search will take.
Maybe I shouldn't use the term 'search'; think of the results as
solutions. The solutions might take some time to calculate, and as the
solutions come in, I would like to show them on the client side, so
the user doesn't have to wait until everything is finished.

Exactly, Parallelism! The JavaScript browser client can be enriching or
adding value to the raw data as it arrives, relieving the server of that
additional processing burden, and making use of the grunt on the client
front-end. Having recently read about Flash's FABridge functionality, I'm
very excited about the possibility of watching a Flex Pie or Line-Chart grow
before the client's eyes as the Data Binding translates the dataset getting
pushed through from JavaScript!
In fact, the user could stop the generation of solutions by sending a
request to the server.

You mean like a "Hot-Abort Button"? As is perfectly illustrated in the
following example: -

http://manson.vistech.net/t3$examples/demo_client_web.html

Username: TIER3_DEMO
Password: QUEUE

I'll post more demo instructions at the end of this but, to see the bit you
want in action, just enter an asterisk "*" for the Queue Name and then click
the green "Get Job Info" button. You'll see that the <select> list is
populated from the server, one row/element at a time. I have tested this
with up to 3000 rows and scalability doesn't seem to be an issue! The one
performance problem I experienced was the tear-down of the old/previous
option-collection before populating the results from the next query. Thanks
to RobG, the problem was solved with DOM Node Cloning and Replacing.
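
(From memory, the clone-and-replace trick is roughly the following; this is
a sketch rather than the actual QUEUE_LOOKUP.HTML code, and "resultList" is
an invented id:)

function clearSelect(selectRef) {
    var empty = selectRef.cloneNode(false);              // shallow clone: same <select>, no options
    selectRef.parentNode.replaceChild(empty, selectRef); // one DOM operation instead of N removals
    return empty;                                        // keep using the returned reference
}

var selectRef = document.getElementById("resultList");
selectRef = clearSelect(selectRef);                      // fast teardown before the next query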

All of the client source code can be found at:-
http://manson.vistech.net/t3$examples/

QUEUE_LOOKUP.HTML contains the code you'd be interested in, specifically the
jobLookup() and getResponse() functions, although following the selectRef
and selectClone objects through the code could be worthwhile. The driving
Applet is CornuCopiae.java and the object definition can be found in
CornuCopiae.html. (The main Socket stuff being in Tier3Socket.java) NB: All
Applet Java code is application-neutral and completely reusable. No Java
coding "need" be done for applications 2 to N.

If you'd prefer someone to have Mugabe(esque) totalitarian control over your
server interaction then I'd suggest Silverlight. (Or Flash's Data Management
Services - "All client-resident data in sync" - Yeah right. But at least
with Flash (and obviously Java) you get the choice!)

Although my code is, at present, VMS-specific you could achieve similar
results with simple INETd server processes, if you dropped the authorization
and were happy with one server process per user.

Cheers Richard Maher

PS. The code doesn't automatically check for versions, but does work on
recent versions of Mac OS X Safari (1.5 JDK), Firefox, Windows Firefox and
IE 6 and 7, Opera, Linux and Firefox. You must have JavaScript enabled,
Applets enabled and a recent JVM. You also can't be behind a firewall that
bans outgoing connections unless you open up port 5255.

Here's some of the functionality-catwalk highlights from the example: -

1) Full, one-time, context-specific, VMS User Authentication. No Cookies,
Session IDs, Password Caching or generic Work-Station or Browser
credentials! When you load the demo_client_web.html page into your browser,
a Java Applet is automatically activated that prompts the user for their VMS
Username and Password via a modal dialogue box. If authorization fails, the
"Access Denied" page will be displayed and VMS Intrusion Detection (in
accordance with the policy set out by your System Manager) will be enforced,
and Login-Failures is incremented in SYSUAF. Alternatively, if authorization
is successful (and you left the "Display Logon Confirmation" box ticked)
then a Welcome dialog box will be displayed detailing last login times and
the number of unsuccessful login attempts (if any). Login-Failures is now
set to zero and last non-interactive login time is set to the current time.

If you refresh this page, or move to a different page, then the server
connection is broken and you must be re-authorised before continuing to
access the Demo Queue Manager application.

2) A Hot-Abort button! After you have pressed the "Get Job Info" button
you'll notice that the "Abort Request" button becomes active and turns red.
(Actually you probably won't notice 'cos this query completes too quickly
:) You can edit the DEMO_UARS.COB code and change the value of the
DEBUG_DELAY field if you want to see your 3GL Interrupt routine in action.)
In this case the cancel-flag I've set in the AST routine is picked up in the
mainline code, resulting in the graceful termination of the loop that
controls "next queue" (or "next row") retrieval.

Also, if you look at the getResponse() function in query_lookup.html, you
will see how the chan.setTimeout() method has been deployed to provide an
erstwhile "blocking" socket Read with the ability to surrender the
event-thread for things like processing the Abort button and ticking over
the clock. (all of this, and much more, "infrastructure-code" is already
there and doesn't have to be re-invented)

3) Predictive text on the Queue Name field so that all matching VMS queues
are retrieved on-demand as the user types. As is now common-place with many
websites, a drop down select list of matching options is automatically
retrieved from the server and made available for the user to select from.

4) Result-set drill-down. Many database queries return a result-set of rows
for the user to scan through and possibly drill-down into for more detail.
I've provided a reasonably generic example of this, where all matching Job
Entries have been populated into a dynamic HTML select list. Once again the
user was able to see the select-list grow, the scroll-bar diminish, and
"Jobs Found" field tick over in real-time, whilst continually being
empowered (by the Abort button) to curtail the results at any time!

If you click on an entry in the Select List then the <frame> changes and the
entry_details.html page appears. See the parent.entry_details.getReady()
call in queue_lookup.html to see how the handover to the new frame takes
place. (Also see goBack() in entry_details.html to see how simply that
operation is reversed.)

The user is now free to move forward, back, first, last, refresh, and delete
queue entries, or return to the previous frame. (Thanks to the deployment of
the VMS Persona functionality, the user is only permitted to see those queue
entries that the Username they signed in under is permitted to see. They can
also *only* delete those entries that this username is allowed to delete.)

5) Floating <div>s. You'll see that any queue names are highlighted in bold
and italics; if you mouseover any of these fields when they are not blank
then the current status information for that queue will be retrieved from
the server and displayed in a quasi-popup DIV.

6) Local Result-Set Sort. If you click on the "header" or "first" row in the
Select List of queues, you will get a popup prompting you for a sort key. If
you select one, the contents of the Select List are sorted in the chosen
order. (Try entering "*" for the Queue Name and then clicking "Get Job Info"
to get some data worth sorting.)
 
steffen haugk

Hi Richard,

thank you for your reply. I had a look at the website and like the way
you can control the service. Tier3 seems to be doing everything I am
asking for.

I think the trouble is, I don't have an ISP that runs VMS, nor do I
think I am capable of programming confidently for that platform.
I imagine you'd experience some locking/concurrency issues with any
number of users attempting to write (and presumably delete -
grow/shrink) rows into the same section of a "scratch" table.
This will indeed be a problem; I'm not quite sure how I will handle it.
The best way would be to wait for the initial request to finish and then
fire the tidy-up request after I have stopped the requests for more
data:

- start search[1]
- start loop[2] to get results
- when [1] finishes, stop [2] and tidy-up[3]
But you won't like it 'cos it doesn't involve HTTP let alone Ajax.
I am not keen on Ajax, but it seems to be /a/ way to do it.
You mean like a "Hot-Abort Button"? As is perfectly illustrated in the
following example: -
Yes indeed.

Again, thank you for taking the time to write this all up. I downloaded
the PDF and read through it, but it went way over my head.

If I ever wanted to use Tier3 (not sure about the licensing model) I
would have to set up a box at home running VMS, and I am not sure at
the moment I want to make it that serious. But I might in the future,
you never know.

Regards, Steffen

PS

The XMLHttpRequest object has a readyState property; I wonder what 3
(Receiving) means. The documentation says 'Some data has been received'.
Is it as simple as that? I shall investigate.
 
rf

Randy Webb said:
steffen haugk said the following on 11/23/2007 8:48 AM:

It means some data has been received. What you don't know from it is how
much has been received. It could be anywhere from 0.00000001% of the
request all the way up to 99.99999999999% of the request.

<pedantic>
If (Receiving) actually at any time meant that 99.99999999999% of the
request had been received, and there were only one byte remaining, then we
would already have received ten thousand thousand million bytes (give or
take an order of magnitude and the odd byte). The final byte takes us to
100% :)
</pedantic>
 
Ben Gun

I have tried it. I used setInterval with 10 seconds, and tested for
readyState=3. Instead of printing out the responseText, I printed the
current time and the number of lines in responseText:

14:54:22:: 821
14:54:31:: 1973
14:54:41:: 3465
14:54:51:: 6076
14:55:01:: 8314
14:55:11:: 10552
14:55:21:: 12790
14:55:31:: 15028
14:55:41:: 17639
14:55:51:: 19877
14:56:01:: 21937
14:56:11:: 24331
14:56:21:: 27067
14:56:31:: 29119

You can see how responseText gets longer with every call.
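
The test script was essentially this (a rough reconstruction rather than
the exact code, with a made-up URL and element id; lastLength shows how you
could keep track of the part of responseText already handled):

var xhr = new XMLHttpRequest();
var lastLength = 0;

xhr.open("GET", "long_search.php", true);
xhr.send(null);

var timer = setInterval(function () {
    if (xhr.readyState === 3 || xhr.readyState === 4) {
        // In IE, reading responseText while readyState is 3 throws - hence the "No" below.
        var text = xhr.responseText;
        var fresh = text.substring(lastLength);   // only the newly arrived part
        lastLength = text.length;
        document.getElementById("log").innerHTML +=
            new Date().toTimeString() + ":: " + text.split("\n").length + "<br>";
        // ...parse "fresh" and append the new results here...
    }
    if (xhr.readyState === 4) {
        clearInterval(timer);
    }
}, 10000);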

I have tested on Mac:

Firefox Yes
Opera Yes
Safari Yes

and Windows:

Firefox Yes
Opera Yes
IE No

Unfortunately I couldn't test on Linux.

I have also tried to get away from setInterval and use the
onreadystatechange event. This is indeed fired several times in
Firefox, but not in the other browsers.
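
The event-driven version looked something like this (again just a sketch;
handlePartial() is a placeholder for whatever renders the partial text):

var req = new XMLHttpRequest();
req.open("GET", "long_search.php", true);
req.onreadystatechange = function () {
    if (req.readyState === 3 || req.readyState === 4) {
        handlePartial(req.responseText);   // fires repeatedly during state 3 in Firefox only
    }
};
req.send(null);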

Hope IE supports this readyState = 3 behaviour soon; otherwise, what is the
point of it?

Cheers
 
Richard Maher

Hi Steffen,
thank you for your reply. I had a look at the website and like the way
you can control the service. Tier3 seems to be doing everything I am
asking for.

Thanks for taking the time to look at it, and for the feedback!
I think the trouble is, I don't have an ISP that runs VMS, nor do I
think I am capable of programming confidently for that platform.

Yeah, that's a bit of a bummer :) We have looked at porting to Linux, but
as Tier3 multi-threads its communication-servers through the use of
Asynchronous System Traps (ASTs - analogous to Windows Asynchronous
Callbacks) rather than threads, the port would not be that straightforward
:-(

Two things worth pointing out here: -

1) Your ISP can still serve up your web-pages hosted on a UNIX or Windows
platform, while the Applet connects back to an entirely separate
Application Server that may not be running a crappy HTTP web-server at all.

2) If you're willing to forgo the client authentication, and are also
willing to support one server process per client, then you could just use
INETd rather than Tier3, and the client code would change very little.
The XMLHttpRequest object has a readyState property; I wonder what 3
(Receiving) means. The documentation says 'Some data has been received'.
Is it as simple as that? I shall investigate.

This sounds promising, especially in the light of Ben Gun's post. If you
can find something that can parse incomplete results (XML?) and remember
where it finished last time, then you're away. XHR objects also have that
abort() method, I believe. I suspect it just cancels the client side and
leaves the server trundling through the results, but better than nothing,
I guess?
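
(Wiring that up would presumably be no more than something like this; the
button id is invented and I haven't tried it myself:)

var searchReq = new XMLHttpRequest();
searchReq.open("GET", "long_search.php", true);
searchReq.send(null);

document.getElementById("abortButton").onclick = function () {
    searchReq.abort();   // cancels the client side; whether the server notices
                         // the dropped connection is another matter
};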

Cheers Richard Maher

 
Ben Gun

I have tested on Mac:

Firefox Yes
Opera Yes
Safari Yes

and Windows:

Firefox Yes
Opera Yes
IE No

Unfortunately I couldn't test on Linux.

Tried Firefox on Linux now, and it worked just fine.

PS: I am the OP.
 
Ben Gun

1) Your ISP can still serve up your web-pages hosted on a UNIX or Windows
platform, while the Applet connects back to an entirely separate
Application Server that may not be running a crappy HTTP web-server at all.

Interesting.

2) If you're willing to forgo the client authentication, and are also
willing to support one server process per client, then you could just use
INETd rather than Tier3, and the client code would change very little.

I don't need client authentication.
This sounds promising, especially in the light of Ben Gun's post. If you
can find something that can parse incomplete results (XML?) and remember
where it finished last time, then you're away.

Either the result set isn't that long, or it includes consecutive
numbering.
XHR objects also have that abort() method, I believe. I suspect it just
cancels the client side and leaves the server trundling through the
results, but better than nothing, I guess?

Have to test that.
Cheers Richard Maher
Thanks again.
 
Richard Maher

Hi Ben,
Interesting.

Yes it is! This is a very powerful feature, especially when you consider
that, as long as you're connecting back to the codebase (where the Applet
was served up from), no certificate signing is required. Your
document-base/web-server can be on an entirely separate network or in a
different geographic location to the Application Server that is being
front-ended by your Applet.

A word of caution here; obviously your Applet has to be served up to the
browser(s) by something. If you don't want to use an HTTP webserver then FTP
is acceptable. Me? I've written a lightweight, ultrafast Applet-Uploader that
talks restaurant HTTP and dumps the applet down the line in up to 64K
chunks. This works great for Applets and octet-streams but unfortunately
can't serve up text/html (or text/anything), as a broken line (not terminated
by LF) causes the browser to barf :-(
I don't need client authentication.

INETd is easy then.
Have to test that.

Please let me know how you get on.

Cheers Richard Maher.

PS. Shame about lack of IE support. (But that's what happens when you try to
make a silk purse out of HTTP :)
 
Ben Gun

Hi Richard,
INETd is easy then.
Good.


Please let me know how you get on.

Funny, I had posted a follow-up that same day from a different computer,
but it seems to have got lost. Here is what I wrote:
++++++++++++++++++++++++++++++++++++++++

Just did. The abort does not only cancel the client side, but the
server also does what is asked of it:

- abort the send() algorithm
- set the response_entity_body to "null"
- cancel network activity
- set STATE to "done" or "unsent", as the case might be
- set send() flag to "false"
- do or do not dispatch the readystatechange event, as the case might be


However, it does not automatically kill your script. What I have done is
check periodically with:

<?php
if (connection_aborted()) {
    return 0;
}
?>

My algorithm consists of one recursive function, and this bit of code,
inserted at the beginning, does the job. (The way I tested was to watch
the httpd process in the activity monitor on the server.)

And does it work?:
Mac OS:
Safari Yes
Opera Yes
Firefox Yes

Windows:
IE Yes
Opera Yes
Firefox Yes

Linux:
Firefox Yes

What I have tested here is the browsers' capability of sending an
abort() request. What I really need to see is how webservers handle
this. This might actually be beyond my possibilities. I have to see
what my webhosting service says - especially about my recursive function!

So the abort() works, and the partial results work, with the exception of
IE.

The interesting bit is that although IE throws an error on requesting
a result after abort(), it actually returns a partial result. That
looks promising. I have to see if I can periodically get my hands on a
partial result, error or no.

+++++++++++++++++++++++++++++++++++++++++++
Cheers Richard Maher.

PS. Shame about lack of IE support. (But that's what happens when you try to
make a silk purse out of HTTP :)
It wasn't such a bad thing in the early 90s, don't you think? I
remember at the time I was quite happy with gopher. Who needs the WWW? Do
you remember the <ISINDEX> tag? Actually, I will try and see which
browsers still support it. I am sure servers do. But browsers? How do
they render it?

Cheers, Ben
 
