11. CrawlSpiders
A CrawlSpider template can be generated quickly with the following commands:

1. scrapy startproject TencentSpider
2. scrapy genspider -t crawl tencent tencent.com

In the previous example we used regular expressions to build new URLs and pass them as Request parameters; this time we can take a different approach.

CrawlSpider is a subclass of Spider. Spider is designed to crawl only the pages listed in start_urls, while CrawlSpider defines a set of rules (Rule) that provide a convenient mechanism for following links, so it is better suited to jobs that extract links from crawled pages and keep crawling them.

1. Crawling Tencent job postings: tencent.py
#!/usr/bin/env python
# -*- coding:utf-8 -*-

import scrapy
# import the CrawlSpider class and Rule
from scrapy.spiders import CrawlSpider, Rule
# import the link-extractor class, used to pull out links that match a rule
from scrapy.linkextractors import LinkExtractor
from TencentSpider.items import TencentItem


class TencentSpider(CrawlSpider):
    name = "tencent"
    allowed_domains = ["hr.tencent.com"]
    start_urls = ["http://hr.tencent.com/position.php?&start=0#a"]

    # extraction rule for links inside the response; matching links are returned as a list
    pagelink = LinkExtractor(allow=(r"start=\d+",))

    rules = [
        # send a request for every extracted link, keep following links,
        # and handle each response with the specified callback
        Rule(pagelink, callback="parseTencent", follow=True)
    ]

    # the callback named in the Rule
    def parseTencent(self, response):
        #evenlist = response.xpath("//tr[@class='even'] | //tr[@class='odd']")
        #oddlist = response.xpath("//tr[@class='even'] | //tr[@class='odd']")
        #fulllist = evenlist + oddlist
        #for each in fulllist:
        for each in response.xpath("//tr[@class='even'] | //tr[@class='odd']"):
            item = TencentItem()
            # position name
            item['positionname'] = each.xpath("./td[1]/a/text()").extract()[0]
            # detail link
            item['positionlink'] = each.xpath("./td[1]/a/@href").extract()[0]
            # position category
            item['positionType'] = each.xpath("./td[2]/text()").extract()[0]
            # number of openings
            item['peopleNum'] = each.xpath("./td[3]/text()").extract()[0]
            # work location
            item['workLocation'] = each.xpath("./td[4]/text()").extract()[0]
            # publish date
            item['publishTime'] = each.xpath("./td[5]/text()").extract()[0]
            yield item

CrawlSpider inherits from Spider. Besides the inherited attributes (name, allowed_domains), it provides new attributes and methods:

LinkExtractors

class scrapy.linkextractors.LinkExtractor

The purpose of a Link Extractor is simple: extract links. Every LinkExtractor has a single public method, extract_links(), which takes a Response object and returns a list of scrapy.link.Link objects. A Link Extractor is instantiated once, and its extract_links method is then called repeatedly on different responses to extract links.

class scrapy.linkextractors.LinkExtractor(
    allow = (),
    deny = (),
    allow_domains = (),
    deny_domains = (),
    deny_extensions = None,
    restrict_xpaths = (),
    tags = ('a', 'area'),
    attrs = ('href',),
    canonicalize = True,
    unique = True,
    process_value = None
)

Main parameters:

- allow: a link is extracted only if its URL matches one of these regular expressions; if empty, all links match.
- deny: URLs matching these regular expressions are never extracted (takes precedence over allow).
- allow_domains: only links whose domains are in this list are extracted.
- deny_domains: links whose domains are in this list are never extracted.
- restrict_xpaths: only extract links from the regions of the page selected by these XPath expressions.
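As a quick check, extract_links() can be tried out in scrapy shell. A minimal sketch, using the same start=\d+ pattern as the spider above (response is the object scrapy shell provides after fetching the listing page):

# inside: scrapy shell "http://hr.tencent.com/position.php?&start=0#a"
from scrapy.linkextractors import LinkExtractor

page_lx = LinkExtractor(allow=(r"start=\d+",))
links = page_lx.extract_links(response)   # response is provided by scrapy shell
for link in links:
    print(link.url, link.text)            # each element is a scrapy.link.Link object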
rules

rules contains one or more Rule objects. Each Rule defines a particular behaviour for crawling the site. If several rules match the same link, the first one defined in this list wins.

class scrapy.spiders.Rule(
    link_extractor,
    callback = None,
    cb_kwargs = None,
    follow = None,
    process_links = None,
    process_request = None
)

- link_extractor: a LinkExtractor that defines which links to pull out of each response.
- callback: the name of the spider method to call for each extracted link's response.
- cb_kwargs: a dict of extra keyword arguments passed to the callback.
- follow: whether links should also be extracted from the responses produced by this rule; defaults to True when callback is None, otherwise to False.
- process_links: the name of a spider method used to filter or modify the extracted links before requests are made.
- process_request: the name of a spider method used to filter or modify each Request generated by this rule.
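The interaction between callback and follow is easiest to see in a small sketch (the domain, URL patterns and callback name below are placeholders, not taken from a real site): a rule without a callback keeps following links by default, while a rule with a callback defaults to not following and hands the matched pages to that callback instead.

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class DemoSpider(CrawlSpider):
    name = "demo"
    allowed_domains = ["example.com"]                  # placeholder domain
    start_urls = ["http://example.com/list?page=1"]

    rules = (
        # listing pages: no callback, so follow defaults to True
        Rule(LinkExtractor(allow=(r"list\?page=\d+",))),
        # detail pages: parsed by parse_detail, not followed further
        Rule(LinkExtractor(allow=(r"/detail/\d+",)), callback="parse_detail"),
    )

    def parse_detail(self, response):
        yield {"url": response.url}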
Crawling rules: sticking with the Tencent recruitment site, here is an example of using CrawlSpider together with rules:
CrawlSpider version

After testing in scrapy shell, modify the code as follows:

# extract links matching 'http://hr.tencent.com/position.php?&start=\d+'
page_lx = LinkExtractor(allow = (r'start=\d+',))

rules = [
    # extract matching links, parse them with the spider's parse method, and follow them
    # (no callback means follow defaults to True)
    Rule(page_lx, callback = 'parse', follow = True)
]

Is this correct?

No! Never set callback to 'parse'. To repeat: CrawlSpider uses the parse method to implement its own logic, so if you override parse, the crawl spider will break. The fix is simply to give the callback another name; see the corrected sketch after the run command below.

Run it with:

scrapy crawl tencent
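A minimal corrected sketch of the rules above; the callback name parseTencent matches the callback defined in tencent.py earlier:

page_lx = LinkExtractor(allow=(r'start=\d+',))

rules = [
    # any name other than `parse` is fine; parseTencent is the
    # callback defined in the tencent.py spider shown above
    Rule(page_lx, callback='parseTencent', follow=True),
]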
Logging

Scrapy provides logging, available through the standard logging module. Add the following two lines anywhere in settings.py and the console output becomes much cleaner:

LOG_FILE = "TencentSpider.log"
LOG_LEVEL = "INFO"

Log levels

Scrapy uses the five standard Python log levels, from most to least severe: CRITICAL (critical errors), ERROR (general errors), WARNING (warnings), INFO (general information), DEBUG (debugging information, the default level).
Logging settings

The following settings in settings.py can be used to configure logging: LOG_ENABLED (default True, turns logging on or off), LOG_ENCODING (default 'utf-8'), LOG_FILE (file to write the log to; None means standard error), LOG_LEVEL (default 'DEBUG'), and LOG_STDOUT (default False; if True, all standard output of the process, e.g. print, is redirected to the log).
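As a small illustration of how these settings are consumed, here is a minimal sketch (the spider below is hypothetical and exists only to demonstrate the logging calls): every spider has a self.logger, and the plain logging module works as well; both are filtered by LOG_LEVEL and written to LOG_FILE.

import logging
import scrapy


class LogDemoSpider(scrapy.Spider):
    # hypothetical spider, used only to demonstrate logging
    name = "logdemo"
    start_urls = ["http://hr.tencent.com/position.php?&start=0#a"]

    def parse(self, response):
        # self.logger is a logging.Logger named after the spider
        self.logger.info("parsed %s", response.url)
        # the standard logging module also works and respects LOG_LEVEL
        logging.warning("response size: %d bytes", len(response.body))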
items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class TencentItem(scrapy.Item):
    # define the fields for your item here like:
    # position name
    positionname = scrapy.Field()
    # detail link
    positionlink = scrapy.Field()
    # position category
    positionType = scrapy.Field()
    # number of openings
    peopleNum = scrapy.Field()
    # work location
    workLocation = scrapy.Field()
    # publish date
    publishTime = scrapy.Field()

pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

import json


class TencentPipeline(object):
    def __init__(self):
        self.filename = open("tencent.json", "w")

    def process_item(self, item, spider):
        text = json.dumps(dict(item), ensure_ascii=False) + ",\n"
        self.filename.write(text.encode("utf-8"))
        return item

    def close_spider(self, spider):
        self.filename.close()

2. Crawling the Dongguan Sunshine Hotline: dongdong.py

# -*- coding: utf-8 -*-

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from newdongguan.items import NewdongguanItem


class DongdongSpider(CrawlSpider):
    name = 'dongdong'
    allowed_domains = ['wz.sun0769.com']
    start_urls = ['http://wz.sun0769.com/index.php/question/questionType?type=4&page=']

    # matching rule for each listing page
    pagelink = LinkExtractor(allow=("type=4"))
    # matching rule for each post on a listing page
    contentlink = LinkExtractor(allow=(r"/html/question/\d+/\d+.shtml"))

    rules = (
        # the URLs on this site are rewritten by the web server, so process_links
        # is used to repair the extracted URLs before requests are sent
        Rule(pagelink, process_links="deal_links"),
        Rule(contentlink, callback="parse_item")
    )

    # links is the list of Link objects extracted from the current response
    def deal_links(self, links):
        for each in links:
            each.url = each.url.replace("?", "&").replace("Type&", "Type?")
        return links

    def parse_item(self, response):
        item = NewdongguanItem()
        # title
        item['title'] = response.xpath('//div[contains(@class,"pagecenter p3")]//strong/text()').extract()[0]
        # post number, taken from the end of the title
        item['number'] = item['title'].split(' ')[-1].split(":")[-1]
        # content: first try the rule for posts that contain images;
        # if it matches, a list of all content nodes is returned
        content = response.xpath('//div[@class="contentext"]/text()').extract()
        # an empty list means there are no images, so fall back to the no-image rule
        if len(content) == 0:
            content = response.xpath('//div[@class="c1 text14_2"]/text()').extract()
            # "".join(content).strip() joins the list into a string and trims whitespace
            item['content'] = "".join(content).strip()
        else:
            item['content'] = "".join(content).strip()
        # link
        item['url'] = response.url
        yield item

items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class NewdongguanItem(scrapy.Item):
    # define the fields for your item here like:
    # title
    title = scrapy.Field()
    # post number
    number = scrapy.Field()
    # content
    content = scrapy.Field()
    # link
    url = scrapy.Field()

pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

import codecs
import json


class NewdongguanPipeline(object):
    def __init__(self):
        # open the output file
        self.filename = codecs.open("dongguan.json", "w", encoding="utf-8")

    def process_item(self, item, spider):
        # json.dumps stores Chinese as ASCII escapes by default;
        # ensure_ascii=False keeps the text as Unicode
        content = json.dumps(dict(item), ensure_ascii=False) + "\n"
        self.filename.write(content)
        return item

    def close_spider(self, spider):
        self.filename.close()
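One detail both pipelines depend on but that is not shown above: the pipeline class has to be enabled in settings.py. A sketch for the two projects (the module paths assume the project layouts implied by the imports above):

# settings.py of the TencentSpider project
ITEM_PIPELINES = {
    "TencentSpider.pipelines.TencentPipeline": 300,
}

# settings.py of the newdongguan project
# ITEM_PIPELINES = {
#     "newdongguan.pipelines.NewdongguanPipeline": 300,
# }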