Parsing an HTML a tag

George

How can I parse an HTML file and collect only the A tags? I have a
start for the code but am unable to figure out how to finish it.
HTML_parse gets the data from the URL document. Thanks for the help

from HTMLParser import HTMLParser

class MyHTMLParser(HTMLParser):

    def handle_starttag(self, tag, attrs):
        pass

    def handle_endtag(self, tag):
        pass

def HTML_parse(data):
    parser = MyHTMLParser()
    parser.feed(data)

def read_page(url):
    "this function returns the entire content of the specified URL document"
    import urllib
    connect = urllib.urlopen(url)
    data = connect.read()
    connect.close()
    return data
 
beza1e1

I do not really know what you want to do. Getting the URLs from the a
tags of an HTML file? I think the easiest method would be a regular
expression.
>>> import urllib, sre
>>> html = urllib.urlopen("http://www.google.com").read()
>>> sre.findall('href="([^>]+)"', html)
['/imghp?hl=de&tab=wi&ie=UTF-8',
 'http://groups.google.de/grphp?hl=de&tab=wg&ie=UTF-8',
 '/dirhp?hl=de&tab=wd&ie=UTF-8',
 'http://news.google.de/nwshp?hl=de&tab=wn&ie=UTF-8',
 'http://froogle.google.de/frghp?hl=de&tab=wf&ie=UTF-8',
 '/intl/de/options/']
>>> sre.findall('href=[^>]+>([^<]+)</a>', html)
['Bilder', 'Groups', 'Verzeichnis', 'News', 'Froogle',
 'Mehr&nbsp;&raquo;', 'Erweiterte Suche', 'Einstellungen',
 'Sprachtools', 'Werbung', 'Unternehmensangebote',
 'Alles \xfcber Google', 'Google.com in English']

Google has some strange html, href without quotation marks: <a
href=http://www.google.com/ncr>Google.com in English</a>
 
Mike Meyer

beza1e1 said:
I do not really know, what you want to do. Getting he urls from the a
tags of a html file? I think the easiest method would be a regular
expression.

I think this ranks as #2 on the list of "difficult one-day
hacks". Yeah, it's simple to write an RE that works most of the
time. It's a major PITA to write one that works in all the legal
cases. Getting one that also handles all the cases seen in the wild is
damn near impossible.
import urllib, sre
html = urllib.urlopen("http://www.google.com").read()
sre.findall('href="([^>]+)"', html)

This fails in a number of cases. Whitespace around the "=" sign for
attributes. Quotes around other attributes in the tag (required by
XHTML). '>' in the URL (legal, but disrecommended). Attributes quoted
with single quotes instead of double quotes, or just unquoted. It
misses IMG SRC attributes. It hands back relative URLs as such,
instead of resolving them to the absolute URL (which requires checking
for the base URL in the HEAD), which may or may not be acceptable.
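The failure modes listed here are easy to demonstrate. A minimal sketch, using the modern re module rather than the old sre alias, with made-up sample tags for illustration:

```python
import re

# The pattern from the earlier post
pattern = re.compile(r'href="([^>]+)"')

# All of these are legal HTML, and the pattern misses every one of them
samples = [
    '<a href = "/spaced">whitespace around the = sign</a>',
    "<a href='/single'>single-quoted attribute</a>",
    '<a href=/bare>unquoted attribute value</a>',
]
for s in samples:
    print(pattern.findall(s))  # prints [] for each sample

# It only matches the one style it was written for
print(pattern.findall('<a href="/ok">double-quoted</a>'))  # prints ['/ok']
```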
Google has some strange html, href without quotation marks: <a
href=http://www.google.com/ncr>Google.com in English</a>

That's not strange. That's just a bit unusual. Perfectly legal, though
- any browser (or other html processor) that fails to handle it is
broken.

<mike
 
beza1e1

I think for a quick hack, this is as good as a parser. A simple parser
would miss some cases as well. REs are hardly extendable though, so
your criticism is valid.

The point is what George wants to do. A mixture would be possible as
well: getting all <a ...> tags with a RE and then extracting the URL
with something like a parser.
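That mixture might look something like this sketch (a hypothetical two-stage extractor: a coarse RE grabs whole <a ...> start tags, and a second, more forgiving pattern pulls the href out of each, accepting double-quoted, single-quoted, and unquoted values):

```python
import re

# Stage 1: grab whole <a ...> start tags
tag_re = re.compile(r'<a\s[^>]*>', re.IGNORECASE)

# Stage 2: pull the href value out of one tag; the three alternatives
# handle double quotes, single quotes, and no quotes at all
href_re = re.compile(
    r'''href\s*=\s*(?:"([^"]*)"|'([^']*)'|([^\s>]+))''', re.IGNORECASE)

def extract_hrefs(html):
    urls = []
    for tag in tag_re.findall(html):
        m = href_re.search(tag)
        if m:
            # exactly one of the three groups matched
            urls.append(next(g for g in m.groups() if g is not None))
    return urls

print(extract_hrefs('<a href="/a">x</a> <a href = \'/b\'>y</a> <a href=/c>z</a>'))
# prints ['/a', '/b', '/c']
```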
 
Mike Meyer

beza1e1 said:
I think for a quick hack, this is as good as a parser. A simple parser
would miss some cases as well. REs are hardly extendable though, so
your criticism is valid.

Pretty much any first attempt is going to miss some cases. There are
libraries available that have stood the test of time. Simply using one
of those is the right solution.
The point is what George wants to do. A mixture would be possible as
well: getting all <a ...> tags with a RE and then extracting the URL
with something like a parser.

I thought the point was to extract all URLs? Those appear in
attributes of tags other than A tags. While that's a meta-problem that
requires properly configuring the parser to deal with, it's something
that's *much* simpler to do if you've got a parser that understands
the structure of HTML - you should be able to specify tag/attribute
pairs to look for - than with something that is treating it as
unstructured text.
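A sketch of what Mike describes, written against Python 3's html.parser (the HTMLParser module was renamed in Python 3; the tag/attribute pairs below are an illustrative list, not an exhaustive one):

```python
from html.parser import HTMLParser

# Tag/attribute pairs that carry URLs -- extending this list is one
# line, unlike reworking a regular expression
URL_ATTRS = {('a', 'href'), ('img', 'src'), ('link', 'href'), ('script', 'src')}

class URLCollector(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.urls = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs, already parsed
        for key, value in attrs:
            if (tag, key) in URL_ATTRS and value:
                self.urls.append(value)

p = URLCollector()
# The parser copes with the unquoted href that trips up the simple RE
p.feed('<a href=/plain>x</a><img src="/logo.png">')
print(p.urls)  # prints ['/plain', '/logo.png']
```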

<mike
 
Leo Jay

You may define a start_a method in MyHTMLParser.

e.g.
import htmllib
import formatter

class HTML_Parser(htmllib.HTMLParser):
    def __init__(self):
        htmllib.HTMLParser.__init__(self,
            formatter.AbstractFormatter(formatter.NullWriter()))

    def start_a(self, args):
        for key, value in args:
            if key.lower() == 'href':
                print value


html = HTML_Parser()
html.feed(open(r'a.htm', 'r').read())
html.close()
 
Thorsten Kampe

* George (2005-09-24 18:13 +0100)
How can I parse an HTML file and collect only the A tags?

import formatter, \
       htmllib, \
       urllib

url = 'http://python.org'

htmlp = htmllib.HTMLParser(formatter.NullFormatter())
htmlp.feed(urllib.urlopen(url).read())
htmlp.close()

print htmlp.anchorlist
 
George

I'm very new to python and I have tried to read the tutorials but I am
unable to understand exactly how I must do this problem.

Specifically, the showIPnums function takes a URL as input, calls the
read_page(url) function to obtain the entire page for that URL, and
then lists, in sorted order, the IP addresses implied in the "<A
HREF=· · ·>" tags within that page.


"""
Module to print IP addresses of tags in web file containing HTML
['0.0.0.0', '128.255.44.134', '128.255.45.54']
['0.0.0.0', '128.255.135.49', '128.255.244.57', '128.255.30.11',
'128.255.34.132', '128.255.44.51', '128.255.45.53',
'128.255.45.54', '129.255.241.42', '64.202.167.129']

"""

def read_page(url):
    import formatter
    import htmllib
    import urllib

    htmlp = htmllib.HTMLParser(formatter.NullFormatter())
    htmlp.feed(urllib.urlopen(url).read())
    htmlp.close()

def showIPnums(URL):
    page = read_page(URL)

if __name__ == '__main__':
    import doctest, sys
    doctest.testmod(sys.modules[__name__])
 
George Sakkis

George said:
I'm very new to python and I have tried to read the tutorials but I am
unable to understand exactly how I must do this problem.

[snip: same code and doctest as in the previous message]


You forgot to mention that you don't want duplicates in the result. Here's a function that passes
the doctest:

from urllib import urlopen
from urlparse import urlsplit
from socket import gethostbyname
from BeautifulSoup import BeautifulSoup

def showIPnums(url):
    """Return the unique IPs found in the anchors of the webpage at the
    given url.

    ['0.0.0.0', '128.255.135.49', '128.255.244.57', '128.255.30.11',
     '128.255.34.132', '128.255.44.51', '128.255.45.53', '128.255.45.54',
     '129.255.241.42', '64.202.167.129']
    """
    hrefs = set()
    for link in BeautifulSoup(urlopen(url)).fetch('a'):
        try:
            hrefs.add(gethostbyname(urlsplit(link["href"])[1]))
        except:
            pass
    return sorted(hrefs)
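The urlsplit/gethostbyname step in that function can be seen in isolation. A small sketch using the Python 3 location of urlsplit (urllib.parse; in Python 2 it lives in the urlparse module), with a made-up example URL:

```python
from urllib.parse import urlsplit

# urlsplit breaks a URL into (scheme, netloc, path, query, fragment);
# index 1 (netloc) is the host name that gethostbyname resolves to an IP
parts = urlsplit('http://www.example.com/path?q=1')
print(parts.netloc)  # prints 'www.example.com'
print(parts.scheme)  # prints 'http'

# A relative href has an empty netloc, which is why the function wraps
# the whole lookup in a try/except
print(urlsplit('/intl/de/options/').netloc)  # prints ''
```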


HTH,
George
 
