Java + DOM + extracting text from XHTML

Damo

I have a program that retrieves a web page, such as a search engine
results page, from the web. I then need to go through the document and
retrieve just the search results. The problem is that I want to omit the
sponsored results. So is there a way I can start analysing the
document at a specific point (i.e. after the sponsored results)?
Thanks
 

Joseph Kesselman

Damo said:
I have a program that retrieves a web page, such as a search engine
results page, from the web. I then need to go through the document and
retrieve just the search results. The problem is that I want to omit the
sponsored results. So is there a way I can start analysing the
document at a specific point (i.e. after the sponsored results)?

If you can describe what that specific point is, you should be able to
write code that navigates through the document to get to that point, and
then do whatever processing you need. If you want a more detailed
answer, you'll have to ask a more specific question...
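As a sketch of what "navigate to that point" can look like with the JDK's own DOM API — assuming, hypothetically, that the sponsored block and the real results each live in a div you can recognise by its id attribute (the ids and markup here are invented; the real page will differ):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class SkipSponsored {
    // Collect the text of each <p> inside the div whose id matches resultsId,
    // ignoring everything before it (e.g. a sponsored block).
    static java.util.List<String> organicResults(String xhtml, String resultsId)
            throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xhtml.getBytes(StandardCharsets.UTF_8)));

        // getElementById would need a DTD declaring ID attributes,
        // so search the divs manually.
        NodeList divs = doc.getElementsByTagName("div");
        java.util.List<String> out = new java.util.ArrayList<>();
        for (int i = 0; i < divs.getLength(); i++) {
            Element div = (Element) divs.item(i);
            if (resultsId.equals(div.getAttribute("id"))) {
                NodeList ps = div.getElementsByTagName("p");
                for (int j = 0; j < ps.getLength(); j++) {
                    out.add(ps.item(j).getTextContent());
                }
            }
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical tidied page: sponsored block first, then the real results.
        String xhtml = "<html><body>"
                + "<div id=\"sponsored\"><p>Ad one</p></div>"
                + "<div id=\"results\"><p>Result one</p><p>Result two</p></div>"
                + "</body></html>";
        System.out.println(organicResults(xhtml, "results"));
    }
}
```

This only works once the page parses as XML at all, which is where the Tidy/NekoHTML suggestions below come in.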
 

Peter Flynn

Damo said:
I have a program that retrieves a web page, such as a search engine
results page, from the web. I then need to go through the document and
retrieve just the search results. The problem is that I want to omit the
sponsored results. So is there a way I can start analysing the
document at a specific point (i.e. after the sponsored results)?

Pass the page through HTML Tidy so that it becomes well-formed XHTML.
Inspect it manually in an XML editor and find where the bits you want to
keep start. Use some kind of XPath inspector (standalone or built into
your editor) to derive a reliably unique XPath statement that will
return the bits you want. Write some XSLT code to implement this and
transform the result to the kind of output you want. Once you've done
this once, you can automate it and it will continue until some
dipsh^H^H^H^H^Hbright spark at the far end decides to change the HTML of
the original page, at which point it will all break down and you'll have
to repeat the process. Organisations who don't like their pages being
sucked clean of information sometimes have random variations built into
their server to make it unpredictable where the information will appear
in the document structure, even though it appears unchanged in the
browser. Good luck :)
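Once the page is well-formed XHTML, the XPath step of this pipeline can be done with the JDK's built-in javax.xml.xpath, without any third-party library. A minimal sketch — the //div[@class='res']/a expression and the sample markup are invented stand-ins for whatever inspecting the real page turns up:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class XPathExtract {
    // Evaluate an XPath expression against well-formed XHTML and
    // return the text content of each matching node.
    static java.util.List<String> extract(String xhtml, String expr) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xhtml.getBytes(StandardCharsets.UTF_8)));
        XPath xpath = XPathFactory.newInstance().newXPath();
        NodeList hits = (NodeList) xpath.evaluate(expr, doc, XPathConstants.NODESET);
        java.util.List<String> out = new java.util.ArrayList<>();
        for (int i = 0; i < hits.getLength(); i++) {
            out.add(hits.item(i).getTextContent().trim());
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical tidied page: ads and results distinguished by class.
        String xhtml = "<html><body>"
                + "<div class=\"ad\"><a href=\"#\">Sponsored link</a></div>"
                + "<div class=\"res\"><a href=\"#\">First hit</a></div>"
                + "<div class=\"res\"><a href=\"#\">Second hit</a></div>"
                + "</body></html>";
        // A "reliably unique" expression derived by inspecting the page.
        System.out.println(extract(xhtml, "//div[@class='res']/a"));
    }
}
```

The final XSLT step could be driven the same way from javax.xml.transform, if you want transformed output rather than a list of strings.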

Alternative: ask them if they provide an XML web service API which you
can use (probably for payment) to deliver the information in a
display-neutral form.

Warning: vacuuming pages from a web site too frequently will get your IP
address-block blacklisted.

///Peter
 

Joseph Kesselman

Peter said:
Pass the page through HTML Tidy so that it becomes well-formed XHTML.

Or use an HTML-to-XML-API parser, such as NekoHTML (part of the Xerces
family). Though Tidy has the advantage that, like a parser, it will
attempt to guess what completely bogus/broken/atrocious HTML was
intended to mean; I think NekoHTML is intended mostly for HTML that is
at least vaguely reasonable.

As Peter said, and as I hinted: If you're doing this as a personal tool,
and are willing to continue to maintain it every time the folks running
the search engine break it, you can probably make this work well enough.
If you're doing it as a business tool, whoever runs that search engine
is going to work very hard to shut you down unless you've contracted
with them -- and if you've got a contract, you can probably pay them for
an XML interface for the search that doesn't include the advertising,
avoiding the whole problem.

Remember, search results are their product. They're putting a lot of
money into the software, machines, and network resources, and they're
providing the service to noncommercial users for no fee. They really are
entitled to make a fair profit on the commercial users and/or those
who aren't willing to look at advertising.
 
 
