Release: | 0.0.3.dev |
---|---|
Date: | April 27, 2014 |
spyda is a simple tool and library written in the Python Programming Language to crawl a given URL, whilst allowing you to restrict results to a specified domain and optionally perform pattern matching against the URLs crawled. spyda reports any URLs it was unable to crawl along with their status codes, and stores successfully crawled links and their content in a directory structure that matches the domain and URLs searched.
spyda was developed at Griffith University as a tool and library to assist with web crawling tasks and data extraction and has been used to help match researcher names against publications as well as extract data and links from external sources of data.
spyda also comes with basic documentation and a comprehensive unit test suite, which require the following:
To build the docs:
To run the unit tests:
The simplest and recommended way to install spyda is with pip. Install the latest stable release from PyPI:
> pip install spyda
If you do not have pip, you may use easy_install:
> easy_install spyda
Alternatively, you may download the source package from the PyPI Page or the Downloads page on the Project Website; extract it and install using:
> python setup.py install
You can also install the latest development version by using pip or easy_install:
> pip install spyda==dev
or:
> easy_install spyda==dev
For further information see the spyda documentation.
Spyda - Python Spider Tool and Library
spyda is a set of tools and a library written in the Python Programming Language for web crawling, article extraction, entity matching and RDF graph generation.
copyright: | Copyright (C) 2012-2013 by James Mills |
---|
Crawler
Crawl a given URL recursively for URLs.
Parameters: |
|
---|---|
Returns: | A dict in the form: {“error”: set(...), “urls”: set(...)}. The error set contains 2-item tuples of (status, url); the urls set contains 2-item tuples of (rel_url, abs_url) |
Return type: | dict |
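As an illustration of how the returned structure might be consumed (the values below are made up for the example; the keys and tuple shapes are those documented above):

```python
# Sketch of the documented return structure of the crawler. A real call
# would obtain this dict from spyda's crawl entry point; here it is built
# by hand purely to show the shape.
result = {
    "error": set([("404 Not Found", "http://example.com/missing")]),
    "urls": set([("/about", "http://example.com/about")]),
}

# Report failures: each entry is a (status, url) tuple.
for status, url in result["error"]:
    print("FAILED", status, url)

# Collect the absolute URLs that were found: each entry is (rel_url, abs_url).
found = sorted(abs_url for rel_url, abs_url in result["urls"])
print(found)
```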
In verbose mode the following single-character letters are used to denote the meaning for URLs being processed:
- (I)nvalid URL
- Did not match allowed (C)ontent Type(s).
- (F)ound a valid URL
- (S)een this URL before
- (E)rror fetching URL
- Did not match supplied (P)attern(s).
- URL already (V)isited
- URL blacklisted
- URL whitelisted
Also in verbose mode each followed URL is printed in the form: <status> <reason> <type> <length> <link> <url>
Web Extraction Tool
Entity Matching Tool
Utilities
Removes HTML or XML character references and entities from a text string.
Parameters: | text – The HTML (or XML) source text. |
---|---|
Returns: | The plain text, as a Unicode string, if necessary. |
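A minimal sketch of this behaviour using the standard library's html module (illustrative only; spyda's own implementation may differ):

```python
from html import unescape

# Numeric character references (&#233;), named entity references (&amp;)
# and the predefined XML entities are all resolved to plain Unicode text.
print(unescape("caf&#233; &amp; bar"))  # café & bar
print(unescape("&lt;p&gt;"))            # <p>
```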
Convert some common unicode characters to their plain text equivalent.
This includes for example left and right double quotes, left and right single quotes, etc.
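A sketch of such a conversion using a small translation table (the exact set of characters handled by spyda's own function may differ):

```python
# Map common typographic Unicode characters to plain ASCII equivalents.
PLAIN = {
    "\u201c": '"', "\u201d": '"',  # left/right double quotation marks
    "\u2018": "'", "\u2019": "'",  # left/right single quotation marks
    "\u2013": "-",                 # en dash
}

def to_plain_text(text):
    """Replace mapped characters; everything else passes through unchanged."""
    return text.translate(str.maketrans(PLAIN))

print(to_plain_text("\u2018Hello\u2019 \u201cworld\u201d"))  # 'Hello' "world"
```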
Use SequenceMatcher to return list of close matches.
word is a sequence for which close matches are desired (typically a string).
possibilities is a list of sequences against which to match word (typically a list of strings).
Optional arg n (default 3) is the maximum number of close matches to return. n must be > 0.
Optional arg cutoff (default 0.6) is a float in [0.0, 1.0]. Possibilities that don’t score at least that similar to word are ignored.
The best (no more than n) matches among the possibilities are returned in a list, sorted by similarity score, most similar first.
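This mirrors the behaviour of difflib.get_close_matches from the standard library, for example:

```python
from difflib import get_close_matches

names = ["James Mills", "Jane Miller", "John Smith"]

# The best-scoring possibility is returned first; candidates scoring
# below the cutoff (here the default 0.6) are dropped entirely.
matches = get_close_matches("James Mill", names, n=3, cutoff=0.6)
print(matches[0])  # James Mills
```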