How to avoid duplicate crawling with a custom Scrapy middleware in Python
Published: 2020-12-16 19:58:49 | Category: Python | Source: web
This article demonstrates how to avoid re-crawling pages you have already visited by writing a custom Scrapy spider middleware. It is shared here for your reference; the details are as follows:
from scrapy import log
from scrapy.http import Request
from scrapy.item import BaseItem
from scrapy.utils.request import request_fingerprint
from myproject.items import MyItem


class IgnoreVisitedItems(object):
    """Spider middleware that ignores item pages which were already
    visited before.

    Requests opt in to filtering by enabling the meta['filter_visited']
    flag. They may optionally define a meta['visited_id'] to identify
    themselves; it defaults to the request fingerprint, although you'd
    want to use the item id, if you already have it beforehand, to make
    it more robust.
    """
    FILTER_VISITED = 'filter_visited'
    VISITED_ID = 'visited_id'
    CONTEXT_KEY = 'visited_ids'

    def process_spider_output(self, response, result, spider):
        # Shared record of already-seen ids, stored on the spider.
        context = getattr(spider, 'context', {})
        visited_ids = context.setdefault(self.CONTEXT_KEY, {})
        ret = []
        for x in result:
            visited = False
            if isinstance(x, Request):
                # Only requests that opted in via meta are filtered.
                if self.FILTER_VISITED in x.meta:
                    visit_id = self._visited_id(x)
                    if visit_id in visited_ids:
                        log.msg("Ignoring already visited: %s" % x.url,
                                level=log.INFO, spider=spider)
                        visited = True
            elif isinstance(x, BaseItem):
                # An item was scraped: record the originating request's id
                # so future requests for the same page are skipped.
                visit_id = self._visited_id(response.request)
                if visit_id:
                    visited_ids[visit_id] = True
                    x['visit_id'] = visit_id
                    x['visit_status'] = 'new'
            if visited:
                # Emit a stub item marking the page as already seen.
                ret.append(MyItem(visit_id=visit_id, visit_status='old'))
            else:
                ret.append(x)
        return ret

    def _visited_id(self, request):
        return request.meta.get(self.VISITED_ID) or request_fingerprint(request)
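To actually use a spider middleware like this, it has to be registered in the project's settings and the spider's requests have to opt in via meta. Below is a minimal sketch, assuming the class lives in a hypothetical myproject.middlewares module (the module path and the priority value 543 are illustrative, not from the original article). Note that the snippet above targets an older Scrapy API: the scrapy.log module and BaseItem have been removed in newer Scrapy releases in favor of the standard logging module and scrapy.Item.

```python
# settings.py (sketch): enable the middleware for all spiders.
# The number is the middleware's priority order; 543 is an arbitrary
# example value between Scrapy's built-in spider middlewares.
SPIDER_MIDDLEWARES = {
    'myproject.middlewares.IgnoreVisitedItems': 543,
}

# In a spider callback, a request opts in to de-duplication like this
# (shown as a comment because it needs a running spider):
#
#   yield Request(url,
#                 meta={'filter_visited': True,
#                       'visited_id': item_id},  # optional stable id
#                 callback=self.parse_item)
```

If 'visited_id' is omitted, the middleware falls back to the request fingerprint, which changes whenever the URL changes; supplying a stable item id makes the filter survive URL variations of the same page.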
Hopefully this article is of some help to readers working on Python programming.
