XmlHttpRequest


Gaz

In Internet Explorer 6 I'm having a problem with the XMLHttpRequest
object. I use it to call a web service and display the result in the
readystatechange event handler. This works the first time I call it, but
subsequently the readystatechange handler function is never called, or
the event itself is never fired.

The same code works fine in Firefox. I've changed it around a lot to no
effect. Can anyone help?

I'm not supposed to create a new XmlHttpRequest object every time I
want to call a remote procedure, am I?

Hope you don't mind the long post, but here's the code; it's simple
enough:

<!doctype html public "-//W3C//DTD HTML 4.0 Transitional//EN">
<html>
<head>
<title> New Document </title>

</head>

<script type="text/javascript">

//initiates the XMLHttpRequest object
//as found here: http://www.webpasties.com/xmlHttpRequest
function getHTTPObject() {
  var xmlhttp;
  /*@cc_on
  @if (@_jscript_version >= 5)
    try {
      xmlhttp = new ActiveXObject("Msxml2.XMLHTTP");
    } catch (e) {
      try {
        xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
      } catch (E) {
        xmlhttp = false;
      }
    }
  @else
    xmlhttp = false;
  @end @*/
  if (!xmlhttp && typeof XMLHttpRequest != 'undefined') {
    try {
      xmlhttp = new XMLHttpRequest();
    } catch (e) {
      xmlhttp = false;
    }
  }
  alert("instantiated");
  return xmlhttp;
}

var Request = getHTTPObject();

function GetTreeOutput()
{
  try
  {
    if (Request != null)
    {
      if (Request.readyState == 4 || Request.readyState == 0)
      {
        var sUrl = "http://itsopt-15/webgame/Service1.asmx/HelloWorld";

        Request.onreadystatechange = GetTreeOutput_Response;

        Request.open("POST", sUrl, true);

        Request.send(null);
      }
    }
    else
    {
    }
  }
  catch (e)
  {
  }
}

function GetTreeOutput_Response()
{
  try
  {
    if (Request.readyState == 4)
    {
      if (Request.status == 200)
      {
        document.getElementById("test").innerHTML = Request.responseText;
      }
      else
      {
      }
    }
  }
  catch (e)
  {
  }
}
</script>

<body>

<input type="button" value="One" onclick="javascript:GetTreeOutput();" />
<input type="button" value="two" onclick="javascript:GetTreeOutput();" />
<input type="button" value="Three" onclick="javascript:GetTreeOutput();" />

<div id="test" style="border: 1px solid Black; background-color: #ddf; color: #003;">
</div>

</body>
</html>
 

Daniel Kabs

Hello Gaz,

thanks for that helpful code snippet.

A small nitpicking though:
<input type="button" value="One" onclick="javascript:GetTreeOutput();"/>

I think you should leave out the "javascript:" in the onclick handler. It is
not needed, or rather it is incorrect to place it there, because only
JavaScript code is allowed there, as far as I know.

Cheers
Daniel Kabs
Germany
 

Daniel Kabs

Hello Martin,

If I change the request URL to point to a different host than the one I
fetched the HTML page from, then when sending the request:
- Mozilla throws a "Permission denied" exception
- Internet Explorer asks me if I want to retrieve data from a foreign host.

How can I work around this?

Cheers
Daniel Kabs
Germany
 

Daniel Kabs

Gaz said:
try {
  xmlhttp = new XMLHttpRequest();
} catch (e) {
  xmlhttp = false;
}

In my opinion, setting xmlhttp to "false" is not a good idea, because if you
*do* get the request object, IE cannot compare it with false:

alert(Request == false)

gives the error "object does not support...".

So I'd set xmlhttp to "null" to flag the error condition (that also goes well
with your test in GetTreeOutput(), "if (Request != null)").
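
Untested, but here is a minimal sketch of what I mean; I have replaced the
conditional compilation with a plain typeof test just for brevity:

function getHTTPObject() {
  var xmlhttp = null;                        // null flags "no usable object"
  if (typeof XMLHttpRequest != 'undefined') {
    try {
      xmlhttp = new XMLHttpRequest();
    } catch (e) {
      xmlhttp = null;
    }
  } else if (typeof ActiveXObject != 'undefined') {
    try {
      xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
    } catch (e) {
      xmlhttp = null;
    }
  }
  return xmlhttp;
}

var Request = getHTTPObject();
if (Request != null) {
  // comparing a host object with null is safe in IE,
  // whereas "Request == false" may throw
}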

Cheers
Daniel Kabs
Germany
 

Martin Honnen

Daniel said:
Hello Martin,




If I change the request URL to point to a different host than the one I
fetched the HTML page from, then when sending the request:
- Mozilla throws a "Permission denied" exception
- Internet Explorer asks me if I want to retrieve data from a foreign host.

How can I work around this?

That is not code I have posted. You are seeing the results of the same-origin
policy as it is applied to fetching XML data: with normal settings, your code
loaded via HTTP is not allowed to load data from a different server.

IE with its security zone model might allow it if the zone is configured to
ask whether the request is allowed; I think you can even configure IE to
allow data access across domains, but that is a security risk in general.

With Mozilla in an HTTP context you would need signed scripts. If you load
the page with the script making the request locally from the file system,
you can have the script request the privilege from the user, e.g.

if (typeof netscape != 'undefined' && typeof netscape.security != 'undefined') {
  netscape.security.PrivilegeManager.enablePrivilege('UniversalBrowserRead');
  httpRequest.open(...);
  httpRequest.send(...);
}
 

Daniel Kabs

Hello!

Martin said:
That is not code I have posted.

I know, but I thought you might shed light on that question and you did :)
You are seeing the results of the same-origin policy as it is applied to
fetching XML data: with normal settings, your code loaded via HTTP is not
allowed to load data from a different server.

I don't see why it would be a security risk to allow scripts to access data
across domains. Quite the contrary: I think scripts should be allowed to
gather data from different hosts, e.g. the web server, the database server,
and so on.

What did I miss here?

Cheers
Daniel
 

Martin Honnen

Daniel Kabs wrote:

I don't see why it would be a security risk to allow scripts to access data
across domains. Quite the contrary: I think scripts should be allowed to
gather data from different hosts, e.g. the web server, the database server,
and so on.

It can surely be desirable, and not a security risk, for a certain script to
access data from certain other servers. But in general, if example.com for
instance uses XML data and an XSLT stylesheet to present its data to its
users, then it could be a security risk if a script from the unrelated
example.org were simply able to present that data on its own site, perhaps
spoofing example.com.

The same-origin policy in client-side scripting certainly prevents some uses
which do no harm, but if you want to be able to prevent abuse you have to
make a compromise, and that is in this case a rather rigid policy, as a
browser can usually not decide whether example.com and example.org would
agree on data exchange between the sites.

For web services, Mozilla has tried to suggest a new model where any site
can publish a file declaring which hosts have what access, but as far as I
know this has not found widespread application:
<http://lxr.mozilla.org/mozilla/source/extensions/webservices/docs/New_Security_Model.html>
 

Lasse Reichstein Nielsen

Daniel Kabs said:
I think you should leave out the "javascript:" in the onclick handler. It is
not needed, or rather it is incorrect to place it there, because only
JavaScript code is allowed there, as far as I know.

I agree on leaving it out, but incidentally, it is not incorrect to place
it there, merely irrelevant. The JavaScript parser accepts it as a label
with the name "javascript", as can be seen by this test code:

onclick="javascript:while(true){ break javascript; }alert('Done');"

Since the label is not used, it is simply wasted space and download
size, and should be removed.

/L
 

Matt Kruse

Daniel said:
I don't see why it would be a security risk to allow
scripts to access data across domains.

XmlHttpRequests are sent just as if the user requested the URL, sending
cookies and everything.

I could write a page which invisibly requested pages from eBay, PayPal, etc.,
where your stored cookie might allow you to skip the login process. I could
then parse the results and potentially gain access to private information.

Scripts should never be allowed to silently take actions on behalf of the
user, except in communication back to the same server that the user is
already accessing.
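
To make the risk concrete, here is a purely hypothetical sketch; the host
name is invented, and this is exactly the kind of request the same-origin
policy refuses today:

// If cross-domain XmlHttpRequest were allowed, a hostile page could do this
// while you are logged in elsewhere; the browser would attach your cookies.
var req = getHTTPObject();                    // factory as posted by Gaz
req.onreadystatechange = function () {
  if (req.readyState == 4 && req.status == 200) {
    // the attacker's script could now read your private account page
    var stolen = req.responseText;
  }
};
req.open("GET", "http://auction.example/myaccount", true);   // invented URL
req.send(null);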
 

Thomas 'PointedEars' Lahn

Lasse said:
Daniel Kabs said:
I think you should leave out the "javascript:" in the onclick handler. It
is not needed, or rather it is incorrect to place it there, because only
JavaScript code is allowed there, as far as I know.

I agree on leaving it out, but incidentally, it is not incorrect to
place it there, merely irrelevant. The JavaScript parser accepts it as a
label with the name "javascript", [...]

_JavaScript 1.2+_[1] parsers do so; other parsers will behave
differently. For example, however proprietary and (thus?) incompatible
the approach, the Microsoft Script Engine will parse it as a
selection of the script language and of the script parser to
be used. Some parsers (particularly those implementing only
JavaScript 1.0 or 1.1) may even show a script error. An
ECMAScript 3 compliant parser will accept it as a label only.


PointedEars
___________
[1] note the character case
 

Daniel Kabs

Hello!

Matt said:
XmlHttpRequests are sent just as if the user requested the URL, sending
cookies and everything.

Thanks for the information, I did not consider cookies and that stuff.
I could write a page which invisibly requested pages from eBay, PayPal,
etc., where your stored cookie might allow you to skip the login process. I
could then parse the results and potentially gain access to private
information.

Is that approach limited to scripts only?

I could write a page that embeds an <img> tag and craft the src attribute
to contain the form values for bidding on a certain eBay auction. The
browser sends the HTTP request and the eBay server processes the GET
request as if the user had requested the URL, doesn't it? Of course, the user
will never see the resulting page, but the request might be enough to
trigger an action to the user's disadvantage.
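
Something like this, where the host and parameter names are invented just to
illustrate the idea:

// The browser fetches the src URL like any other request, cookies included;
// the "image" never renders, but the GET request has already been made.
var img = new Image();
img.src = "http://auction.example/placeBid?item=12345&maxbid=100";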

Cheers
Daniel
 

Daniel Kabs

Hello!

Martin said:
It can surely be desirable, and not a security risk, for a certain script to
access data from certain other servers. But in general, if example.com for
instance uses XML data and an XSLT stylesheet to present its data to its
users, then it could be a security risk if a script from the unrelated
example.org were simply able to present that data on its own site, perhaps
spoofing example.com.

This "security risk" is not so different from embedding pages using iframes.
If example.com page outputs HTML (which is more prevalent than XML/XSLT),
everybody can embedd example.com on their own pages using iframes.

I reckon, using a cunningly designed parent page you could add your own
submit buttons underneath the iframe and deceive the user to click them,
sending a request that's completely under the control of example.org.
The same-origin policy in client-side scripting certainly prevents some uses
which do no harm, but if you want to be able to prevent abuse you have to
make a compromise, and that is in this case a rather rigid policy, as a
browser can usually not decide whether example.com and example.org would
agree on data exchange between the sites.

So the same-origin policy denies scripts access to the iframe's content? I
think that has not prevented anything, as we have learned in recent years
(phishing, ...). The "bad guys" just copy the page and store it on their own
server.

The point is to divert users to example.org in the first place. If you
manage that, no client-side same-origin policy can save them.
For web services, Mozilla has tried to suggest a new model where any site
can publish a file declaring which hosts have what access, but as far as I
know this has not found widespread application:
<http://lxr.mozilla.org/mozilla/source/extensions/webservices/docs/New_Security_Model.html>

If IE does not support it...


Cheers
Daniel
 

Jim Ley

Hello!



This "security risk" is not so different from embedding pages using iframes.
If example.com page outputs HTML (which is more prevalent than XML/XSLT),
everybody can embedd example.com on their own pages using iframes.

No, this is completely wrong; the security risk is much, much higher. These
embedding or image-src techniques are one-way: you can't read what you get
back. That severely limits what you can do.
I reckon that, using a cunningly designed parent page, you could add your own
submit buttons underneath the iframe and deceive the user into clicking them,
sending a request that is completely under the control of example.org.

So it would submit, but none of the values the user entered into the fields
of the other form would be sent.
So the same-origin policy denies scripts access to the iframe's content? I
think that has not prevented anything, as we have learned in recent years
(phishing, ...). The "bad guys" just copy the page and store it on their own
server.

Yes, which is a completely different thing from XmlHttpRequest. Cross-domain
XmlHttpRequest would make phishing a very different activity: you could do
live man-in-the-middle attacks from inside the user's browser. Not a good
idea!

Jim.
 

Daniel Kabs

Hello!

Jim said:
No, this is completely wrong; the security risk is much, much higher. These
embedding or image-src techniques are one-way: you can't read what you get
back. That severely limits what you can do.

I think this is debatable. And before we continue to evaluate the level of
"security risk" each of these techniques creates, I suggest we first clarify
*what* the "bad guys" (got a better term for them?) want to do.

For example: if they succeed by just submitting information, then not being
able to read the response back doesn't prevent anything.
So it would submit, but none of the values the user entered into the fields
of the other form would be sent.

I just wanted to give an example of how you could submit forged data *on
behalf of* the user without using XmlHttpRequest. You are right: this way
you don't get any information back from the user. So my answer fits the
posting from Matt Kruse better and does not belong here.

In fact, Martin was talking about "page spoofing", and that would be
realized not by using iframes but by creating a copy of the original page and
storing it on example.org (the bad server).
Yes, which is a completely different thing from XmlHttpRequest. Cross-domain
XmlHttpRequest would make phishing a very different activity: you could do
live man-in-the-middle attacks from inside the user's browser. Not a good
idea!

Sounds interesting, an "intra-browser" attack. :)

True, denying cross-domain XmlHttpRequests raises the bar for gathering user
information. But then, once you have got the user onto example.org or
http://www.ghoogle.com, he is lost anyway.

Cheers
Daniel
 

Thomas 'PointedEars' Lahn

Daniel said:
Thanks for the information, I did not consider cookies and that stuff.


Is that approach limited to scripts only?

No, it is not.
I could write a page that embeds an <img> tag and craft the src attribute
to contain the form values for bidding on a certain eBay auction.

You would not succeed this way, since they most certainly use sessions _and_
POST requests. The cookie and URL do not carry enough information, so the
(GET) request would be "rejected". However, with XMLHTTPRequest supported,
POST requests are possible.
The browser sends the HTTP request and the eBay server processes the GET
request as if the user had requested the URL, doesn't it?

If they were this careless, eBay would not be what it is today.
Of course, the user will never see the resulting page, but the request
might be enough to trigger an action to the user's disadvantage.

Yes, but not with the URL (of the GET request) alone.
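
For completeness, a minimal sketch of such a POST via XMLHttpRequest; the
URL and field names are invented, and the same-origin policy of course still
applies:

var req = getHTTPObject();                    // factory as posted by Gaz
req.open("POST", "http://auction.example/placeBid", true);
req.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
req.onreadystatechange = function () {
  if (req.readyState == 4) {
    // inspect req.status / req.responseText here
  }
};
req.send("item=12345&maxbid=100");            // form-encoded body, unlike a bare GET URL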


PointedEars
 

Daniel Kabs

Thomas said:
You would not succeed this way, since they most certainly use sessions
_and_ POST requests.

That presumes eBay currently checks that bidding forms can only be
submitted using POST requests. I doubt this, and I invite you to check it.
There is a nifty tool for Mozilla/Firefox called "Web Developer Toolbar":
http://www.chrispederick.com/work/firefox/webdeveloper/
It disables JavaScript with one click and turns POST requests into GET
requests. Do this conversion just before placing your bid.
If they were this careless, eBay would not be what it is today.

I miss the <ironic mode='on'> tag here.

Are you saying that eBay cares about the security of its customers? Do
you insinuate that eBay has always carefully crafted its server
pages to achieve the highest security possible? Are you suggesting that
eBay protects its customers by fixing known bugs immediately?

I don't think so! They just got away with it. There have been numerous
reports of how security on eBay is flawed.

Cheers
Daniel Kabs
Germany
 

Thomas 'PointedEars' Lahn

Daniel said:
That presumes eBay currently checks that bidding forms can only be
submitted using POST requests.

Yes, indeed.
I doubt this
ACK

and I invite you to check it.

No. For two reasons:

1. You are the one that doubts, not me. I have only mentioned
a possibility that IMO has a quite high probability of being true.

2. I do not have an eBay account, and that is not going to change
in the foreseeable future.
There is a nifty tool for Mozilla/Firefox called "Web Developer
Toolbar": http://www.chrispederick.com/work/firefox/webdeveloper/

Thanks, I already have it installed.
It disables Javascript with one click

What does that have to do with the fact in question?
and turns POST requests into GET requests. Do this conversion just
before placing your bid.

Next one, please.
I miss the <ironic mode='on'> tag here.

Well, I don't.
Are you saying that eBay cares about the security of its customers?

I do hope so.
Do you insinuate that eBay has always carefully crafted its server
pages to achieve the highest security possible? Are you suggesting that
eBay protects its customers by fixing known bugs immediately?

Not "always" and "immediately", but "most of the time" and "with a
reasonable delay".
I don't think so! They just got away with it. There have been numerous
reports of how security on eBay is flawed.

Ahh -- but there are also reports that security holes on eBay have been
fixed. However, this is going straight OT. Please let us continue it
elsewhere, if ever.


PointedEars

P.S.: In private mails, you may also write in German.
P.P.S.: Don't use "(war: ...)" in subjects but "(was: ...)".
The latter form is also common and supported in de.ALL.
 

Daniel Kabs

Hello!
Not "always" and "immediately", but "most of the time" and "with a
reasonable delay".

How can you say that? Do you care? You don't even have an account on eBay!

To stay on topic: regarding JavaScript security on eBay, I recommend you go
to http://www.wortfilter.de and enter "javascript" in the search form in the
top right corner of the page.

Cheers
Daniel
 
