spyda Package

Spyda - Python Spider Tool and Library

spyda is a set of tools and a library, written in the Python programming language, for web crawling, article extraction, entity matching and RDF graph generation.

copyright: Copyright (C) 2012-2013 by James Mills

crawler Module

Crawler

spyda.crawler.crawl(root_url, blacklist=None, content_types=['text/html', 'text/xml'], max_depth=0, patterns=None, verbose=False, whitelist=None)

Crawl a given url recursively for urls.

Parameters:
  • root_url (str) – Root URL to start crawling from.
  • blacklist (list or None) – A list of blacklisted URLs (matched by regex) that will not be traversed.
  • content_types (list or CONTENT_TYPES) – A list of allowable content types to follow.
  • max_depth (int) – Maximum depth to follow; 0 for unlimited depth.
  • patterns (list or None or False) – A list of regex patterns to match URLs against. If it evaluates to False, all URLs match.
  • verbose (bool) – If True, print verbose logging.
  • whitelist (list or None) – A list of whitelisted URLs (matched by regex) to traverse.
Returns:

A dict in the form: {"error": set(...), "urls": set(...)}. The error set contains 2-item tuples of (status, url); the urls set contains 2-item tuples of (rel_url, abs_url).

Return type:

dict

In verbose mode the following single-character letters are used to denote the state of each URL being processed:

    1. (I)nvalid URL
    2. Did not match allowed (C)ontent Type(s).
    3. (F)ound a valid URL
    4. (S)een this URL before
    5. (E)rror fetching URL
    6. Did not match supplied (P)attern(s).
    7. URL already (V)isited
    8. URL blacklisted
    9. URL whitelisted

Also in verbose mode each followed URL is printed in the form: <status> <reason> <type> <length> <link> <url>
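
A minimal usage sketch is shown below; the URL and option values are illustrative only, and the result keys follow the return description above:

    from spyda.crawler import crawl

    # Crawl example.org two levels deep, following only HTML pages.
    # (URL and options are illustrative; see the parameter list above.)
    result = crawl(
        "http://example.org/",
        content_types=["text/html"],
        max_depth=2,
        verbose=True,
    )

    # Result keys as documented above: "error" holds (status, url) pairs,
    # "urls" holds (rel_url, abs_url) pairs.
    for status, url in result["error"]:
        print("failed: {0} ({1})".format(url, status))

    for rel_url, abs_url in result["urls"]:
        print(abs_url)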

spyda.crawler.parse_options()
spyda.crawler.main()

extractor Module

Web Extraction Tool

spyda.extractor.calais_options(parser)
spyda.extractor.parse_options()
spyda.extractor.extract(source, filters)
spyda.extractor.job(opts, source)
spyda.extractor.main()

matcher Module

Entity Matching Tool

spyda.matcher.parse_options()
spyda.matcher.build_datasets(opts, source)
spyda.matcher.job(opts, datasets, source)
spyda.matcher.main()

processors Module

utils Module

Utilities

spyda.utils.is_url(s)
spyda.utils.dict_to_text(d)
spyda.utils.unescape(text)

Removes HTML or XML character references and entities from a text string.

Parameters: text – The HTML (or XML) source text.
Returns: The plain text, as a Unicode string, if necessary.
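
For example (illustrative input and output), named and numeric character references are replaced by the characters they denote:

    from spyda.utils import unescape

    # "&amp;" and "&#169;" are replaced by their Unicode characters.
    print(unescape("Tom &amp; Jerry &#169; 2013"))   # -> Tom & Jerry © 2013
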
spyda.utils.unichar_to_text(text)

Convert some common Unicode characters to their plain text equivalents.

This includes for example left and right double quotes, left and right single quotes, etc.
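
As an illustrative sketch (the exact mapping depends on which characters the function handles), "curly" double quotes would be converted to plain ASCII quotes:

    from spyda.utils import unichar_to_text

    # Left/right double quotes (U+201C/U+201D) become plain ASCII quotes.
    print(unichar_to_text(u"\u201cHello\u201d"))   # -> "Hello"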

spyda.utils.get_close_matches(word, possibilities, n=3, cutoff=0.6)

Use SequenceMatcher to return list of close matches.

word is a sequence for which close matches are desired (typically a string).

possibilities is a list of sequences against which to match word (typically a list of strings).

Optional arg n (default 3) is the maximum number of close matches to return. n must be > 0.

Optional arg cutoff (default 0.6) is a float in [0.0, 1.0]. Possibilities that don’t score at least that similar to word are ignored.

The best (no more than n) matches among the possibilities are returned in a list, sorted by similarity score, most similar first.
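
Assuming this wrapper behaves as described above (mirroring difflib.get_close_matches), usage looks like the following sketch:

    from spyda.utils import get_close_matches

    # Return up to 3 possibilities scoring at least 0.6 against "appel".
    matches = get_close_matches("appel", ["ape", "apple", "peach", "puppy"])
    print(matches)   # e.g. ['apple', 'ape']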

spyda.utils.fetch_url(url)
spyda.utils.log(msg, *args, **kwargs)
spyda.utils.error(e)
spyda.utils.status(msg, *args)
spyda.utils.parse_html(html)
spyda.utils.doc_to_text(doc)

version Module

Version Module

So we only have to maintain version information in one place!
