Simulating Login and Scraping Zhihu with Python's Scrapy Framework
1. How Cookies Work
The first two attributes are the prerequisites for a cookie to work; beyond those there is also the cookie's size (Size; browsers differ in how many cookies they allow and how large each may be).

2. Simulated Login

Now let's look at how to submit a form through Scrapy. First, inspect the form that is sent at login time. Using the same trick as before, deliberately enter a wrong password so that the login request's headers and form data can be captured (I used the Network panel of Chrome's built-in developer tools). Inspecting the captured form shows that it has four parts.
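As a quick illustration outside Scrapy (a minimal sketch using only the standard library and a made-up HTML fragment; the token value is hypothetical), a hidden form token such as `_xsrf` can be pulled out of a login page like this:

```python
import re

# A made-up fragment of a login page containing a hidden token field
html = '<form><input type="hidden" name="_xsrf" value="a1b2c3d4"/></form>'

# Equivalent in spirit to the XPath '//input[@name="_xsrf"]/@value'
match = re.search(r'name="_xsrf"\s+value="([^"]+)"', html)
xsrf = match.group(1)
print(xsrf)  # a1b2c3d4
```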
This confirms our guess, so we can now write the form-login logic:

```python
def start_requests(self):
    # Override the spider's start method to issue a custom request;
    # on success the post_login callback is invoked
    return [Request("https://www.zhihu.com/login", callback=self.post_login)]

# FormRequest
def post_login(self, response):
    print 'Preparing login'
    # Grab the _xsrf field from the returned page; it is required
    # for the form submission to succeed
    xsrf = Selector(response).xpath('//input[@name="_xsrf"]/@value').extract()[0]
    print xsrf
    # FormRequest.from_response is a Scrapy helper for POSTing forms;
    # after a successful login the after_login callback is invoked
    return [FormRequest.from_response(response,
                                      formdata={
                                          '_xsrf': xsrf,
                                          'email': '123456',
                                          'password': '123456'
                                      },
                                      callback=self.after_login)]
```

The main steps are explained in the comments inside the functions.

3. Tracking Cookies

CookiesMiddleware: this cookie middleware keeps track of the cookies sent by the web server and sends them back on subsequent requests. Its `cookiejar` meta key is used like this:

```python
for i, url in enumerate(urls):
    yield scrapy.Request("http://www.example.com", meta={'cookiejar': i},
                         callback=self.parse_page)

def parse_page(self, response):
    # do some processing
    return scrapy.Request("http://www.example.com/otherpage",
                          meta={'cookiejar': response.meta['cookiejar']},
                          callback=self.parse_other_page)
```

We can now modify our spider's methods accordingly so that they track cookies:

```python
# Override the spider's start method; on success the callback is invoked
def start_requests(self):
    return [Request("https://www.zhihu.com/login",
                    meta={'cookiejar': 1},   # added the cookiejar meta key
                    callback=self.post_login)]

# FormRequest.from_response ran into problems, so POST the form directly
def post_login(self, response):
    ...
    return [FormRequest("http://www.zhihu.com/login",
                        meta={'cookiejar': response.meta['cookiejar']},  # note how the cookiejar is carried over
                        headers=self.headers,
                        callback=self.after_login,
                        dont_filter=True)]
```

4. Faking Request Headers

To be safe, we can fill in more fields in the request headers, as follows:

```python
headers = {
    "Accept": "*/*",
    "Accept-Encoding": "gzip,deflate",
    "Accept-Language": "en-US,en;q=0.8,zh-TW;q=0.6,zh;q=0.4",
    "Connection": "keep-alive",
    "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.111 Safari/537.36",
    "Referer": "http://www.zhihu.com/"
}
```

In Scrapy, both Request and FormRequest take a `headers` argument at construction time, so we can pass our custom headers in.
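As a sanity check outside Scrapy (a minimal sketch using only the standard library; the request is built but never actually sent), the same header dict can be attached to a urllib request:

```python
import urllib.request

headers = {
    "Accept": "*/*",
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.111 Safari/537.36",
    "Referer": "http://www.zhihu.com/",
}

# Build (but do not send) a request carrying the custom headers;
# urllib normalizes header names, e.g. "User-Agent" -> "User-agent"
req = urllib.request.Request("http://www.zhihu.com/", headers=headers)
print(req.get_header("User-agent"))  # the spoofed UA string
```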
Putting all of this together gives the final version of the login spider:

```python
#!/usr/bin/env python
# -*- coding:utf-8 -*-
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.selector import Selector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.http import Request, FormRequest
from zhihu.items import ZhihuItem

class ZhihuSipder(CrawlSpider):
    name = "zhihu"
    allowed_domains = ["www.zhihu.com"]
    start_urls = ["http://www.zhihu.com"]
    rules = (
        Rule(SgmlLinkExtractor(allow=('/question/\d+#.*?',)),
             callback='parse_page', follow=True),
        Rule(SgmlLinkExtractor(allow=('/question/\d+',)),
             callback='parse_page', follow=True),
    )
    headers = {
        "Accept": "*/*",
        "Accept-Encoding": "gzip,deflate",
        "Accept-Language": "en-US,en;q=0.8,zh-TW;q=0.6,zh;q=0.4",
        "Connection": "keep-alive",
        "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.111 Safari/537.36",
        "Referer": "http://www.zhihu.com/"
    }

    # Override the spider's start method; on success the callback is invoked
    def start_requests(self):
        return [Request("https://www.zhihu.com/login",
                        meta={'cookiejar': 1},
                        callback=self.post_login)]

    # FormRequest.from_response ran into problems, so POST the form directly
    def post_login(self, response):
        print 'Preparing login'
        # Grab the _xsrf field needed for the form submission to succeed
        xsrf = Selector(response).xpath('//input[@name="_xsrf"]/@value').extract()[0]
        print xsrf
        return [FormRequest("http://www.zhihu.com/login",
                            meta={'cookiejar': response.meta['cookiejar']},
                            headers=self.headers,   # note the custom headers here
                            formdata={
                                '_xsrf': xsrf,
                                'email': '1095511864@qq.com',
                                'password': '123456'
                            },
                            callback=self.after_login,
                            dont_filter=True)]

    def after_login(self, response):
        for url in self.start_urls:
            yield self.make_requests_from_url(url)

    def parse_page(self, response):
        problem = Selector(response)
        item = ZhihuItem()
        item['url'] = response.url
        item['name'] = problem.xpath('//span[@class="name"]/text()').extract()
        print item['name']
        item['title'] = problem.xpath('//h2[@class="zm-item-title zm-editable-content"]/text()').extract()
        item['description'] = problem.xpath('//div[@class="zm-editable-content"]/text()').extract()
        item['answer'] = problem.xpath('//div[@class=" zm-editable-content clearfix"]/text()').extract()
        return item
```

5. The Item Class and the Crawl Interval

```python
from scrapy.item import Item, Field

class ZhihuItem(Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    url = Field()          # URL of the scraped question
    title = Field()        # question title
    description = Field()  # question description
    answer = Field()       # question answers
    name = Field()         # user name
```

Set a crawl interval so that overly fast crawling does not trigger the site's anti-crawler mechanisms. In settings.py:

```python
BOT_NAME = 'zhihu'

SPIDER_MODULES = ['zhihu.spiders']
NEWSPIDER_MODULE = 'zhihu.spiders'

DOWNLOAD_DELAY = 0.25  # set the download delay to 250 ms
```

More settings are covered in the official documentation.
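Scrapy does not wait exactly DOWNLOAD_DELAY between requests: with RANDOMIZE_DOWNLOAD_DELAY (on by default) it waits a random time between 0.5x and 1.5x of that value. A rough sketch of that behaviour:

```python
import random

DOWNLOAD_DELAY = 0.25  # 250 ms, as in settings.py above

def effective_delay(base=DOWNLOAD_DELAY):
    # Mimics Scrapy's RANDOMIZE_DOWNLOAD_DELAY behaviour:
    # a random wait between 0.5x and 1.5x of the base delay
    return random.uniform(0.5 * base, 1.5 * base)

delays = [effective_delay() for _ in range(5)]
print(delays)  # five values between 0.125 and 0.375 seconds
```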
A sample of the results (only a tiny excerpt):

```
...
 'url': 'http://www.zhihu.com/question/20688855/answer/16577390'}
2014-12-19 23:24:15+0800 [zhihu] DEBUG: Crawled (200) <GET http://www.zhihu.com/question/20688855/answer/15861368> (referer: http://www.zhihu.com/question/20688855/answer/19231794) []
2014-12-19 23:24:15+0800 [zhihu] DEBUG: Scraped from <200 http://www.zhihu.com/question/20688855/answer/15861368>
    {'answer': [u'\u9009\u4f1a\u8ba1\u8fd9\u4e2a\u4e13\u4e1a\uff0c...'],
     'description': [u'\u672c\u4eba\u5b66\u4f1a\u8ba1\u6559\u80b2\u4e13\u4e1a\uff0c...'],
     'name': [],
     'title': [u'\n\n', u'\n\n'],
     'url': 'http://www.zhihu.com/question/20688855/answer/15861368'}
...
```

(The answer and description fields hold the scraped Chinese text as unicode escapes; they are heavily truncated here.)

6. Remaining Issues