You can start multiple spider instances that share a single Redis queue, which makes this best suited for broad multi-domain crawls. Scraped items are pushed into a Redis queue, so you can start as many post-processing processes as needed, all sharing the items queue. The project provides a scheduler with a duplication filter, an item pipeline, and base spiders.

The default request serializer is pickle, but it can be changed to any module that provides loads and dumps functions. Note that pickle is not compatible between Python versions. Version 0.3 changed request serialization from marshal to cPickle, so requests persisted with version 0.2 will not work with 0.3.

The class scrapy_redis.spiders.RedisSpider enables a spider to read URLs from Redis. The URLs in the Redis queue are processed one after another; if the first request yields more requests, the spider processes those before fetching another URL from Redis.
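Because any module with loads and dumps functions can serve as the serializer, a JSON-based replacement for pickle is easy to sketch. This is a minimal illustration of the interface described above, not scrapy-redis' own code; the `JsonSerializer` name is a hypothetical stand-in for whatever module you would plug in.

```python
import json


class JsonSerializer:
    """Any object exposing loads/dumps can act as a serializer module.

    Unlike pickle, JSON output is portable across Python versions,
    at the cost of only handling JSON-serializable data.
    """

    @staticmethod
    def dumps(obj):
        # Serialize a request-like dict to a string for the Redis queue.
        return json.dumps(obj)

    @staticmethod
    def loads(data):
        # Restore the dict when the request is popped from the queue.
        return json.loads(data)


# Round-trip a request-like dict through the serializer.
request = {"url": "http://example.com", "method": "GET"}
restored = JsonSerializer.loads(JsonSerializer.dumps(request))
```

The round trip returns an equal dict, which is all the scheduler needs from a serializer module.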

Features

  • Distributed crawling/scraping
  • Distributed post-processing
  • Scrapy plug-and-play components
  • Python 2.7, 3.4, or 3.5 required
  • Redis >= 2.8 required
  • Scheduler + Duplication Filter, Item Pipeline, Base Spiders
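The plug-and-play components above are enabled through Scrapy's settings. A minimal sketch of a project's settings.py might look like the following; the component paths and setting names reflect scrapy-redis' documented defaults, but check your installed version before relying on them.

```python
# settings.py — minimal sketch for a scrapy-redis project.

# Use the shared Redis-backed scheduler and duplicate filter so every
# spider instance pulls from (and dedupes against) the same queue.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

# Keep the request queue in Redis between runs (pause/resume crawls).
SCHEDULER_PERSIST = True

# Push scraped items into a Redis list so separate post-processing
# workers can consume them.
ITEM_PIPELINES = {
    "scrapy_redis.pipelines.RedisPipeline": 300,
}

# Location of the shared Redis server.
REDIS_URL = "redis://localhost:6379"
```

With these settings, any number of spider processes started against the same Redis server will share one request queue and one items queue.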

License

MIT License

Additional Project Details

Programming Language

Python

Related Categories

Python Browsers, Python Web Scrapers

Registered

2021-11-09