Hello,
Running some tests with a list of data, I came to realize that when the
data was in XML and greater than 100 items, the browser processed the
data faster than when the data was in JSON.
It sort of surprised me... I expected the opposite.
I expect that depends very much on what you are trying to do and how
you are doing it. If you are using, say, XPath to get certain XML
elements, the equivalent object access will likely require walking
down an object structure to find the equivalent elements.
The XML document likely looks very different to the JSON structure,
e.g. consider the following XML:
<foo name="root" version="1.2">
  <bar name="bar0"/>
  <bar name="bar1"/>
  <fred name="fred0">
    <bar name="bar2"/>
    <bar name="bar3"/>
  </fred>
</foo>
An equivalent JSON object might be:
var dataObj = {
  foo0: {
    nodeType: 'foo',
    name: 'root',
    version: '1.2',
    childNodes: {
      bar0: {
        nodeType: 'bar',
        name: 'bar0'
      },
      bar1: {
        nodeType: 'bar',
        name: 'bar1'
      },
      fred0: {
        nodeType: 'fred',
        name: 'fred0',
        childNodes: {
          bar2: {
            nodeType: 'bar',
            name: 'bar2'
          },
          bar3: {
            nodeType: 'bar',
            name: 'bar3'
          }
        }
      }
    }
  }
};
An equivalent to "getElementsByTagName" is:
function getNodesByType(obj, typeName, store) {
  var t;
  store = store || [];
  for (var p in obj) {
    t = obj[p];
    // Only objects can be nodes or contain child nodes; the guard also
    // avoids reading a property of null.
    if (t && typeof t == 'object') {
      if (t.nodeType == typeName) {
        store.push(t);
      }
      getNodesByType(t, typeName, store);
    }
  }
  return store;
}
To get all bar elements:
var allBars = getNodesByType(dataObj, 'bar');
Presumably if you are using XML you can use getElementsByTagName or
XPath; I would expect them to be faster than "object walking" for
large documents. But if I were using a large, complex object I might
also create an index of frequently accessed elements so I don't need
to walk the structure every time.
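As a rough sketch of that idea: walk the structure once and index nodes by their nodeType, so repeated lookups avoid re-walking. The shape (nodeType, childNodes) follows the example object above; the function name buildTypeIndex is my own invention, not an established API.

```javascript
// Hypothetical: build a one-time index of nodes keyed by nodeType.
function buildTypeIndex(obj, index) {
  index = index || {};
  for (var p in obj) {
    var t = obj[p];
    if (t && typeof t == 'object') {
      if (t.nodeType) {
        (index[t.nodeType] = index[t.nodeType] || []).push(t);
      }
      buildTypeIndex(t, index);
    }
  }
  return index;
}

// A small sample in the same shape as dataObj above:
var sampleObj = {
  foo0: {
    nodeType: 'foo', name: 'root',
    childNodes: {
      bar0: { nodeType: 'bar', name: 'bar0' },
      fred0: {
        nodeType: 'fred', name: 'fred0',
        childNodes: { bar1: { nodeType: 'bar', name: 'bar1' } }
      }
    }
  }
};

var typeIndex = buildTypeIndex(sampleObj);
// typeIndex['bar'] now holds both bar nodes without any further walking.
```

After the one-off walk, each lookup is a direct property access rather than a recursive search.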
I might also create object mutation methods to keep those indexes
current ("live") so that adding, moving or deleting elements also
maintains relevant indexes.
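A sketch of what such mutation methods might look like, assuming the same index layout as above: all adds and removes go through two helpers, which update the index at the same time. The names addNode/removeNode are assumptions for illustration.

```javascript
// Hypothetical mutation helpers that keep a type index "live".
function addNode(parent, key, node, index) {
  parent.childNodes = parent.childNodes || {};
  parent.childNodes[key] = node;
  if (node.nodeType) {
    (index[node.nodeType] = index[node.nodeType] || []).push(node);
  }
  return node;
}

function removeNode(parent, key, index) {
  var node = parent.childNodes && parent.childNodes[key];
  if (!node) return null;
  delete parent.childNodes[key];
  // Array#indexOf requires ES5; use a loop for older browsers.
  var list = index[node.nodeType] || [];
  var i = list.indexOf(node);
  if (i > -1) list.splice(i, 1);
  return node;
}

var root = { nodeType: 'foo', name: 'root' };
var index = {};
addNode(root, 'bar0', { nodeType: 'bar', name: 'bar0' }, index);
// index['bar'].length is 1; the index stayed current without a re-walk.
```

The cost of each mutation goes up slightly, but lookups never pay for a fresh walk.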
My assumption is that I am using ActiveX to process the XML while the
JSON needs to be interpreted by the browser, thus the ActiveX
component runs like a normal program while the JSON runs on top of
the VM the browser creates for JavaScript.
Since you have not shown any code, any such explanation is just
opinion[1]. It is rare that a useful conclusion can be drawn from an
unsubstantiated assumption.
Not all browsers use ActiveX to process XML, therefore it is not
necessarily a factor.
JSON is a data transport mechanism, as is XML. It is not inherently
slower; it may be slower in a particular case if it is more verbose
(requires more bits to be transferred) than the equivalent XML.
Methods to deal with JSON are native to the browser and likely also to
the platform and are therefore (from a code optimisation and
prioritisation perspective) roughly equivalent to the code that
processes XML. I don't see that one is necessarily any slower or
faster than the other.
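For instance, turning JSON text into an object is done by the browser's own parser (JSON.parse in modern browsers; older ones relied on eval or a library such as json2.js), not by script walking characters:

```javascript
// Minimal sketch: deserialising JSON is handled by built-in code
// (JSON.parse), so parsing itself is not an obvious bottleneck.
var jsonText = '{"foo": {"name": "root", "version": "1.2"}}';
var parsed = JSON.parse(jsonText);
// parsed.foo.name is "root"
```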
Built-in methods for processing XML may not be available for JSON
where the JSON is structured as if it were XML (as above). Therefore
methods written in script must be used for the JSON, and they may be
slower simply for being script rather than built-in. But it may also
be possible to modify the structure of the JSON to take better
advantage of the built-in methods that are available, rather than
trying to use JSON like XML. It might also be discovered that
structuring the XML to match an optimised JSON structure reverses the
perceived performance.
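One way to restructure, as a sketch: rather than nesting the JSON to mimic XML, flatten it into an array of records so built-in Array methods do the selecting. The field names (type, name, parent) are assumptions for illustration.

```javascript
// Hypothetical flat structure: one record per node, parentage by name.
var nodes = [
  { type: 'foo',  name: 'root' },
  { type: 'bar',  name: 'bar0',  parent: 'root' },
  { type: 'bar',  name: 'bar1',  parent: 'root' },
  { type: 'fred', name: 'fred0', parent: 'root' },
  { type: 'bar',  name: 'bar2',  parent: 'fred0' },
  { type: 'bar',  name: 'bar3',  parent: 'fred0' }
];

// All bar elements via the built-in filter (ES5), no recursive walk:
var allBars = nodes.filter(function (n) { return n.type == 'bar'; });
// allBars.length is 4
```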
But that is all conjecture.
1. "Opinions are like armpits - everyone has at least one and there is
nothing special about them." -- Dr Karl Kruszelnicki