
NOTE: This recipe has been updated with suggested improvements since the last revision.

This is a simple web crawler I wrote to test websites and links. It will traverse every link it finds, down to any given depth.

See --help for usage.
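
For example, assuming the recipe is saved as crawler.py (the filename here is just a placeholder), a couple of typical invocations look like this:

$ python crawler.py --depth 3 http://example.com/
$ python crawler.py --links http://example.com/

The first crawls the site to a maximum depth of 3 and prints every URL found; the second only lists the links on the given page (substitute example.com with whatever site you want to test).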

I'm posting this recipe because this kind of problem has been asked about on the Python mailing list a number of times, and I thought I'd share my simple little implementation based on the standard library and BeautifulSoup.

--JamesMills

Python, 190 lines
#!/usr/bin/env python

"""Web Crawler/Spider

This module implements a web crawler. It is very _basic_ and
needs to be extended to do anything useful with the
traversed pages.
"""

import re
import sys
import time
import math
import urllib2
import urlparse
import optparse
from cgi import escape
from traceback import format_exc
from Queue import Queue, Empty as QueueEmpty

from BeautifulSoup import BeautifulSoup

__version__ = "0.2"
__copyright__ = "Copyright (C) 2008-2011 by James Mills"
__license__ = "MIT"
__author__ = "James Mills"
__author_email__ = "James Mills, James dot Mills st dotred dot com dot au"

USAGE = "%prog [options] <url>"
VERSION = "%prog v" + __version__

AGENT = "%s/%s" % (__name__, __version__)

class Crawler(object):

    def __init__(self, root, depth, locked=True):
        self.root = root
        self.depth = depth
        self.locked = locked
        self.host = urlparse.urlparse(root)[1]
        self.urls = []
        self.links = 0
        self.followed = 0

    def crawl(self):
        page = Fetcher(self.root)
        page.fetch()
        q = Queue()
        for url in page.urls:
            q.put(url)
        followed = [self.root]

        n = 0

        while True:
            try:
                # Non-blocking get: the crawl is single-threaded, so an
                # empty queue means there are no more links to follow.
                url = q.get(block=False)
            except QueueEmpty:
                break

            n += 1

            if url not in followed:
                try:
                    host = urlparse.urlparse(url)[1]
                    if self.locked and re.match(".*%s" % self.host, host):
                        followed.append(url)
                        self.followed += 1
                        page = Fetcher(url)
                        page.fetch()
                        for i, url in enumerate(page):
                            if url not in self.urls:
                                self.links += 1
                                q.put(url)
                                self.urls.append(url)
                        if n > self.depth and self.depth > 0:
                            break
                except Exception, e:
                    print "ERROR: Can't process url '%s' (%s)" % (url, e)
                    print format_exc()

class Fetcher(object):

    def __init__(self, url):
        self.url = url
        self.urls = []

    def __getitem__(self, x):
        return self.urls[x]

    def _addHeaders(self, request):
        request.add_header("User-Agent", AGENT)

    def open(self):
        url = self.url
        try:
            request = urllib2.Request(url)
            handle = urllib2.build_opener()
        except IOError:
            return None
        return (request, handle)

    def fetch(self):
        # open() returns None if the request could not be built
        opened = self.open()
        if opened is None:
            return
        request, handle = opened
        self._addHeaders(request)
        if handle:
            try:
                content = unicode(handle.open(request).read(), "utf-8",
                        errors="replace")
                soup = BeautifulSoup(content)
                tags = soup('a')
            except urllib2.HTTPError, error:
                if error.code == 404:
                    print >> sys.stderr, "ERROR: %s -> %s" % (error, error.url)
                else:
                    print >> sys.stderr, "ERROR: %s" % error
                tags = []
            except urllib2.URLError, error:
                print >> sys.stderr, "ERROR: %s" % error
                tags = []
            for tag in tags:
                href = tag.get("href")
                if href is not None:
                    url = urlparse.urljoin(self.url, escape(href))
                    if url not in self:
                        self.urls.append(url)

def getLinks(url):
    page = Fetcher(url)
    page.fetch()
    for i, url in enumerate(page):
        print "%d. %s" % (i, url)

def parse_options():
    """parse_options() -> opts, args

    Parse any command-line options given returning both
    the parsed options and arguments.
    """

    parser = optparse.OptionParser(usage=USAGE, version=VERSION)

    parser.add_option("-q", "--quiet",
            action="store_true", default=False, dest="quiet",
            help="Enable quiet mode")

    parser.add_option("-l", "--links",
            action="store_true", default=False, dest="links",
            help="Get links for specified url only")

    parser.add_option("-d", "--depth",
            action="store", type="int", default=30, dest="depth",
            help="Maximum depth to traverse")

    opts, args = parser.parse_args()

    if len(args) < 1:
        parser.print_help()
        raise SystemExit, 1

    return opts, args

def main():
    opts, args = parse_options()

    url = args[0]

    if opts.links:
        getLinks(url)
        raise SystemExit, 0

    depth = opts.depth

    sTime = time.time()

    print "Crawling %s (Max Depth: %d)" % (url, depth)
    crawler = Crawler(url, depth)
    crawler.crawl()
    print "\n".join(crawler.urls)

    eTime = time.time()
    tTime = eTime - sTime

    print "Found:    %d" % crawler.links
    print "Followed: %d" % crawler.followed
    print "Stats:    (%d/s after %0.2fs)" % (
            int(math.ceil(float(crawler.links) / tTime)), tTime)

if __name__ == "__main__":
    main()
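
If you would rather use the two classes from your own code than via the command line, a minimal example looks something like this (again assuming the recipe is saved as crawler.py, and with example.com as a placeholder URL):

from crawler import Crawler, Fetcher

# Crawl a site and print every URL found
crawler = Crawler("http://example.com/", 5)
crawler.crawl()
for url in crawler.urls:
    print url

# Or just list the links found on a single page
page = Fetcher("http://example.com/")
page.fetch()
for i, url in enumerate(page):
    print "%d. %s" % (i, url)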

17 comments

sebastien.renard 15 years, 4 months ago

Hello,

Why don't you use the standard Python Queue module?

http://www.python.org/doc/2.5.2/lib/module-Queue.html

Aaron Gallagher 15 years, 4 months ago

There's also cgi.escape instead of your encodeHTML function.

desarollar enimsay 14 years, 8 months ago

Hello, can you give me more details on this simple web crawler? I want to use it but I did not understand how it works. Thank you

Martin Zimmermann 14 years, 5 months ago

This recipe is not working with the current BeautifulSoup module: google cache

You need to remove line 165 and replace 'soup.feed(content)' with 'soup = BeautifulSoup(content)' in line 168.

Really nice recipe! Thanks.

Jürgen Hermann 14 years, 2 months ago

Regarding Aaron's comment -- actually, you want to use urllib.quote here. Neither encodeHTML nor cgi.escape is correct for encoding URL paths.
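
For example, in a Python 2 session (just to illustrate the difference):

>>> from cgi import escape
>>> from urllib import quote
>>> escape("/files/a&b.html")   # HTML-escapes the ampersand; not a valid URL path
'/files/a&amp;b.html'
>>> quote("/files/a b.html")    # percent-encodes the path the way a URL expects
'/files/a%20b.html'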

a 13 years, 5 months ago

self._queue[(len(self._queue) - (n + 1))] can be written as self._queue[-(n + 1)]
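
That is just Python's standard negative indexing; for example, with a plain list:

>>> q = ['a', 'b', 'c', 'd']
>>> n = 1
>>> q[len(q) - (n + 1)]
'c'
>>> q[-(n + 1)]
'c'

Both expressions pick the same element, counting back from the end of the list.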

James Mills (author) 13 years, 2 months ago

Interesting how you go back to a recipe you wrote over 2 years ago, only to find it's been viewed over 16,000 times and has 6 comments that I never replied to nor read :(

Shame on ActiveState for not having a mechanism for emailing comment notifications to the original author! Grrr :)

In any case -- I'll update this recipe with the suggestions and improvements. Thank you all! (Obviously things have changed in 2 years!)

cheers James

roger matelot 12 years, 12 months ago

Hi James,

I've made a couple of changes to your crawler for my use. If you want to incorporate any or all of them, my fork is here: https://github.com/ewa/python-webcrawler/

-Eric

James Mills (author) 12 years, 12 months ago

Hi Eric, Thanks! Glad to see others finding this simple implementation useful!

cheers James

Suresh 11 years ago

Would you please upload a video or anything else explaining the execution? It would be helpful.

James Mills (author) 11 years ago

@Suresh: What are you not sure of?

For a better implementation based on this recipe please see my spyda tools and library which does:

  • Web Crawling
  • Article Extraction via CSS selectors
  • Optional OpenCalais processing
  • Entity Matching

--JamesMills / prologic

Suresh 11 years ago

Thanks a lot for such a quick reply... I'm actually not sure about the control flow of the code and how the passing of values takes place.

And what are the spyda tools? I'm new to Python, so it's really difficult for me to understand how the code works.

Thanks for your help in advance.

James Mills (author) 11 years ago

My recommendation is to do some reading (tutorials, documentation, etc.) and have a play. This isn't the forum for teaching Python. Take this recipe as you will. It's a web crawler: a simple two-class system with a single-threaded loop that fetches each URL, parses it and collects links. Not very complicated.
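
If it helps, here is that loop boiled down to a few lines. This is only a simplified sketch using the recipe's Fetcher class (it ignores the depth limit and the same-host check, and example.com is a placeholder):

root = Fetcher("http://example.com/")   # fetch the starting page
root.fetch()                            # ... and collect its links

queue = list(root.urls)                 # links waiting to be visited
seen = []                               # links already collected

while queue:
    url = queue.pop(0)                  # take the next link
    page = Fetcher(url)                 # fetch and parse it
    page.fetch()
    for link in page.urls:              # queue anything not seen before
        if link not in seen:
            seen.append(link)
            queue.append(link)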

The spyda tools I mentioned are better written.

--JamesMills / prologic

Suresh 11 years ago

I would definitely do that... Anyway, thanks a lot for the code. I will work on it and try to implement it. I might disturb you again if I get stuck.

James Mills (author) 11 years ago

I'm sorry, but as I said, this is not a forum for teaching and learning Python. Please seek help from the following resources:

Suresh 11 years ago

Oh, never mind, I don't take things so hard.

Actually, I'm in a rush here. I'm a final-year student doing my bachelor's, and I have to show a working web crawler to my mentor by next week, so I'm in desperate need of any help I can get.

James Mills (author) 11 years ago

I forgot to post a link to spyda:

https://bitbucket.org/prologic/spyda

$ crawl --help
$ extract --help
$ match --help

--JamesMills / prologic