navigator.userAgent required to set location.href in Safari?

Peter Michaux

Setting the location.href state in Safari sends the browser into an
indefinite loading state [1]. This has been discussed here before [2]
with no solution offered that I can find in the archives. One of the
WebKit developers on IRC directed me to a web page [3] with this
clever and useful solution [4].

function setHash(hash) {
    if (navigator.userAgent.match(/Safari/i)) {
        if (parseFloat(navigator.userAgent.substring(
                navigator.userAgent.indexOf('Safari') + 7)) < 412) {
            // The form doesn't need to be attached to the document.
            var f = document.createElement('form');
            f.action = hash;
            f.submit();
        } else {
            var evt = document.createEvent('MouseEvents');
            evt.initEvent('click', true, true);
            var anchor = document.createElement('a');
            anchor.href = hash;
            anchor.dispatchEvent(evt);
        }
    } else {
        window.location.hash = hash;
    }
}

The navigator.userAgent sniff is, of course, repugnant, and I don't
know whether there is a way to directly feature test, or to use
multiple-object inference, to determine the version number. The time
required to build versions of WebKit and Safari from source for
versions around 412 seems unwarranted when the above code "works".

It is interesting to consider that in this case there may not be any
other tests that could be used. It could be that version 412 only
changed the navigator.userAgent string, broke the old workaround, and
enabled the new workaround. Such a minimal change set may require the
use of navigator.userAgent.
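For what it's worth, the sniff boils down to reading the build number
that follows "Safari/" in the userAgent string. A minimal sketch of
that parse, pulled out as a pure function (the function name is mine,
not from the code above):

```javascript
// Extract the WebKit build number from a userAgent string such as
// "... AppleWebKit/312.1 (KHTML, like Gecko) Safari/312.6".
// Returns null when "Safari" does not appear at all.
function safariBuildNumber(ua) {
  var i = ua.indexOf('Safari');
  if (i === -1) {
    return null;
  }
  // Skip the seven characters of "Safari/" and read the leading number.
  return parseFloat(ua.substring(i + 7));
}
```

So a userAgent ending in "Safari/312.6" yields 312.6, which falls below
the 412 threshold (roughly Safari 2.0) that the workaround branches on.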

Peter

[1] current nightly webkit builds are fixed
[2] <URL:
http://groups.google.com/group/comp...=location.href+safari&rnum=3#6f6d9106694b0e6c>
[3] <URL:
http://swfaddress.svn.sourceforge.n.../js/swfaddress.js?revision=149&view=markup>
[4] rewritten here with no external dependencies.
 
Richard Cornford

> Setting the location.href state in Safari sends the browser into
> an indefinite loading state [1].

Nonsense. The thing that would send those Safari versions into an
indefinite loading loop would be unconditionally assigning a value
to the href (or do you mean hash here?) each time the page loads.
And it will not only be Safari that goes into a loop under those
circumstances (for example, older versions of Opera have the same
behaviour, and it has been observed in other browsers).

There is no technical basis for assuming that assigning to any property
of the location object will not result in the page being re-loaded, and
historically there are plenty of examples where the reloading of the
page is precisely what does happen with such assignments.

> This has been discussed here
> before [2] with no solution offered

The solution has been offered, but people prefer not to listen.

> that I can find in the archives. One of the WebKit
> developers on IRC directed me to a web page [3] with
> this clever and useful solution [4].

It looks like an unreliable hack to me.

> function setHash(hash) {
>     if (navigator.userAgent.match(/Safari/i)) {
>         if (parseFloat(navigator.userAgent.substring(
>                 navigator.userAgent.indexOf('Safari') + 7)) < 412) {
>             // The form doesn't need to be attached to the document.
>             var f = document.createElement('form');
>             f.action = hash;
>             f.submit();
>         } else {
>             var evt = document.createEvent('MouseEvents');
>             evt.initEvent('click', true, true);
>             var anchor = document.createElement('a');
>             anchor.href = hash;
>             anchor.dispatchEvent(evt);
>         }
>     } else {
>         window.location.hash = hash;
>     }
> }

> The navigator.userAgent sniff is, of course, repugnant,
> and I don't know whether there is a way to directly feature
> test, or to use multiple-object inference, to determine
> the version number.
>
> The time required to build versions of WebKit and Safari
> from source for versions around 412 seems unwarranted
> when the above code "works".
>
> It is interesting to consider that in this case there may not
> be any other tests that could be used. It could be that
> version 412 only changed the navigator.userAgent string,
> broke the old workaround, and enabled the new workaround.
> Such a minimal change set may require the use of
> navigator.userAgent.
<snip>

In trying to solve a problem it is usually a good idea to carry out some
sort of analysis of the problem, and preferably one that goes beyond the
superficial. Why is the page being re-loaded when you set the - hash -
value a problem at all? Setting a - hash - value is just an instruction
to navigate to a fragment identifier within a page, and while the page
re-loading before it does that navigation may not be exactly desirable,
it does not prevent the user from ending up navigated to the fragment
identifier in question.

But navigating to a fragment identifier is not what this code is about,
is it? What we have here is a manifestation of the realisation, by
people who attempt to use AJAX dynamic loading to create web sites, that
the products of their creation can no longer be bookmarked, where being
bookmarked is a fundamentally desirable feature of most web site-like
entities. So someone proposes storing context information for the AJAX
system (a value that will encapsulate the page's current context, and
from which that context can be re-constructed) in the - hash - of the
page, having observed that setting the hash to a non-existent fragment
identifier in some known browsers does not result in the page being
re-loaded. And then, as their creations are exposed to browsers that
were previously beyond their experience, they discover that some do
re-load the page, and so they look for hacks to add to their code to
deal with each new browser encountered.
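To make the pattern being described concrete, here is a minimal sketch
of encapsulating a page's context in a fragment-identifier string and
reconstructing it later. The function names and the key=value
serialisation are my own illustration, not taken from any particular
library:

```javascript
// Serialise a flat state object into a fragment-identifier string,
// e.g. {page: '2', view: 'list'} -> "page=2&view=list", suitable
// for assigning to location.hash.
function encodeContext(state) {
  var parts = [];
  for (var key in state) {
    if (state.hasOwnProperty(key)) {
      parts.push(encodeURIComponent(key) + '=' +
                 encodeURIComponent(state[key]));
    }
  }
  return parts.join('&');
}

// Reconstruct the state object from a hash string (with or without
// the leading "#"), e.g. on page load or when following a bookmark.
function decodeContext(hash) {
  var state = {};
  var pairs = hash.replace(/^#/, '').split('&');
  for (var i = 0; i < pairs.length; i++) {
    var eq = pairs[i].indexOf('=');
    if (eq > 0) {
      state[decodeURIComponent(pairs[i].substring(0, eq))] =
          decodeURIComponent(pairs[i].substring(eq + 1));
    }
  }
  return state;
}
```

On load, something like decodeContext(window.location.hash) would hand
the AJAX system the value from which the page's context is
re-constructed.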

But now we are looking at the way in which individual browsers respond
to the setting of - hash - values, and are three steps past the
identification of the real problem. AJAX is a great technology for web
applications, but applications are not things where the notion of
bookmarking some point in the process really makes sense (saving your
work at some point, maybe, but not bookmarking). And AJAX is a great
technology for assistive widgets (there is nothing wrong with presenting
the user with a list of the available options to choose from as they
type). But AJAX is the wrong technology for the web site concept, not
least because web sites are things where the concept of bookmarking a
particular 'page' is normal, expected and desirable.

The solution is not some unreliable hack to allow bookmarking, and then
piling on more unreliable hacks when the features of the first hack that
made it unreliable become evident. All that approach will ever do is
reduce the manifestations of unreliability below some arbitrary level of
observability. The real solution to the problem is to have the design
process choose the technology that is appropriate to the task that is to
be achieved. People have been making reliable, bookmarkable, search
engine-friendly web sites for a decade or more now, and there are dozens
of mature technologies for assisting in that process. There is no good
reason why the viability of a web site should hang on whether an
arbitrary sequence of characters contains a particular sub-string at a
particular location in a code branch inside a javascript function.

Richard.
 
Peter Michaux

>> Setting the location.href state in Safari sends the browser into
>> an indefinite loading state [1].
>
> Nonsense. The thing that would send those Safari versions into an
> indefinite loading loop would be unconditionally assigning a value
> to the href (or do you mean hash here?) each time the page loads.

I do mean hash. Doing either of the following

    window.location.hash = 'foo';
    window.location.hash = '#foo';

in Safari causes the refresh button to turn into a stop button (which
cannot be stopped) and the title of the page begins "Loading".

> And it will not only be Safari that goes into a loop under those
> circumstances (for example, older versions of Opera have the same
> behaviour, and it has been observed in other browsers).
>
> There is no technical basis for assuming that assigning to any
> property of the location object will not result in the page being
> re-loaded, and historically there are plenty of examples where the
> reloading of the page is precisely what does happen with such
> assignments.
>
>> This has been discussed here
>> before [2] with no solution offered
>
> The solution has been offered, but people prefer not to listen.

I get the feeling you are implying that there is no solution, so the
answer is to avoid the problem with a different design.

>> that I can find in the archives. One of the WebKit
>> developers on IRC directed me to a web page [3] with
>> this clever and useful solution [4].
>
> It looks like an unreliable hack to me.

Admittedly it doesn't look rock solid, which is partly why I posted it;
however, if a job needs to be done and can be done in a particular way
on the main browsers, then some people will insist on doing so.

<snip>

> In trying to solve a problem it is usually a good idea to carry out
> some sort of analysis of the problem, and preferably one that goes
> beyond the superficial. Why is the page being re-loaded when you set
> the - hash - value a problem at all? Setting a - hash - value is just
> an instruction to navigate to a fragment identifier within a page,
> and while the page re-loading before it does that navigation may not
> be exactly desirable, it does not prevent the user from ending up
> navigated to the fragment identifier in question.

Safari doesn't reload the page. It goes into an infinite loading state
of sorts: it's not loading anything, but it is locked up in a minor
way, in that the stop button and the title are not working.

> But navigating to a fragment identifier is not what this code is
> about, is it? What we have here is a manifestation of the
> realisation, by people who attempt to use AJAX dynamic loading to
> create web sites, that the products of their creation can no longer
> be bookmarked, where being bookmarked is a fundamentally desirable
> feature of most web site-like entities.

Indeed it is desirable. And it isn't just bookmarking that is
important; it is emailing links. People expect to be able to copy and
paste what is in the URL bar of the browser and have it just work. The
URL is the only place this data can be stored and still work in an
email. Google and Yahoo maps both benefit from this; however, one of
them sets location.hash on every page change, which causes the Safari
problem. The other has a "get permanent link" button that goes to a
page with a bookmarkable address. That is technically superior, but my
mother would never figure it out. Keeping the URL bar up to date is
superior UI.
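Keeping the URL bar in sync also has to work in the other direction:
the page needs to notice when the user changes the hash (via the back
button, or by pasting a link). Before browsers offered an event for
this, the usual technique was to poll location.hash on an interval. A
minimal sketch, with the location object injected so the idea can run
outside a browser (watchHash is my name for it, not a real API):

```javascript
// Returns a poll function that fires onChange whenever loc.hash has
// changed since the last poll. In a browser you would pass
// window.location and run the poller on an interval, e.g.
//     setInterval(watchHash(window.location, render), 100);
function watchHash(loc, onChange) {
  var last = loc.hash;
  return function poll() {
    if (loc.hash !== last) {
      last = loc.hash;
      onChange(loc.hash);
    }
  };
}
```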

[snip]
> But now we are looking at the way in which individual browsers
> respond to the setting of - hash - values, and are three steps past
> the identification of the real problem. AJAX is a great technology
> for web applications, but applications are not things where the
> notion of bookmarking some point in the process really makes sense
> (saving your work at some point, maybe, but not bookmarking).

"Bookmarking" the state of an application is valuable, and if more
applications did this we would find the feature very useful. Although
it is not quite the same, when Firefox crashes I'm happy that it
restores my session just how it was when it crashed. (Yes, I know that
could mean it restores the crashing state, but that isn't usually the
case.) I wish I knew how to save a Firefox state so I could start up
in "work" or "home" states with the right tabs open to the right URLs.
I think it can be done but I haven't looked into it. This would also
be a nice feature in something like Photoshop, where different image
editing processes require a different set of palettes to be open.
Saving the state of an application is a great idea to me.
> And AJAX is a great technology for assistive widgets (there is
> nothing wrong with presenting the user with a list of the available
> options to choose from as they type). But AJAX is the wrong
> technology for the web site concept, not least because web sites are
> things where the concept of bookmarking a particular 'page' is
> normal, expected and desirable.

Not all web sites follow the "web site concept", I suppose. The back
end tool of a CMS can be complex, and making it more like an
application, or several applications on a few pages, has shown itself
to be a very useful and appreciated design for the user. Now if
bookmarking state can be thrown in, then all the better for faster
get-to-where-I-want-to-go time.

> The solution is not some unreliable hack to allow bookmarking, and
> then piling on more unreliable hacks when the features of the first
> hack that made it unreliable become evident. All that approach will
> ever do is reduce the manifestations of unreliability below some
> arbitrary level of observability. The real solution to the problem
> is to have the design process choose the technology that is
> appropriate to the task that is to be achieved. People have been
> making reliable, bookmarkable, search engine-friendly web sites for
> a decade or more now, and there are dozens of mature technologies
> for assisting in that process. There is no good reason why the
> viability of a web site should hang on whether an arbitrary sequence
> of characters contains a particular sub-string at a particular
> location in a code branch inside a javascript function.

Agreed. Developing a web application has to stay within the confines
of what a web application can do. I still think the idea of a
bookmarkable and emailable web application state is a great one.
Perhaps the powers that be need to introduce an official way to use
the location object to do this.

Peter
 
