Batch-downloading XML files with Python
Published: 2020-12-20 10:57:05 | Category: Python | Source: compiled from the web
1. Target site:
https://www.cnvd.org.cn/shareData/list
2. The page files to be downloaded (screenshot omitted in this copy).

3. The page requires login, so to batch-download the shared vulnerability files we send the browser's cookies along with each request.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Date: 2019-08-17
Author: Bob
Description: batch-download XML files with Python
"""
import requests
from bs4 import BeautifulSoup


def cnvd_spider():
    url = 'https://www.cnvd.org.cn/shareData/list?max=240&offset=0'
    headers = {
        # cookies copied from a logged-in browser session; replace with your own
        "Cookie": "__jsluid_s=65d5e7902f04498e89b16e93fb010b3c; "
                  "__jsluid_h=1ab428e655aee36ac3c9835db29b6714; "
                  "JSESSIONID=91BB91B37543D365AA64895EDFCD828F; "
                  "__jsl_clearance=1566003116.655|0|CYPFsKirGYBG12qtoOrS5Kq1rM0%3D",
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) "
                      "AppleWebKit/537.36 (KHTML, like Gecko) "
                      "Chrome/76.0.3809.100 Safari/537.36",
    }
    html = requests.get(url=url, headers=headers).text
    soup = BeautifulSoup(html, 'lxml')
    # each download link is an <a> tag whose title attribute is "下载xml" ("download xml")
    links = soup.find_all('a', attrs={'title': '下载xml'})
    for link in links:
        file_url = 'https://www.cnvd.org.cn' + link.get('href')
        file_name = link.get_text()
        xml_data = requests.get(url=file_url, headers=headers)
        # response.content is bytes, so the file must be opened in binary mode
        with open(file_name, 'wb') as f:
            f.write(xml_data.content)


if __name__ == '__main__':
    cnvd_spider()
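Hard-coding the full cookie header as above works, but the anti-bot tokens (such as __jsl_clearance) expire quickly, so the string has to be refreshed by hand. A slightly more maintainable sketch, not part of the original article, keeps the pasted cookie string in one place and attaches it to a requests.Session, so the headers and pooled connections are reused across every file download. The helper name make_cnvd_session and the placeholder cookie value are assumptions for illustration.

```python
import requests


def make_cnvd_session(cookie_string: str) -> requests.Session:
    """Build a Session carrying the browser cookies needed by cnvd.org.cn.

    cookie_string is pasted from the browser's dev tools (placeholder value
    when testing); the Session reuses TCP connections and sends these headers
    on every subsequent .get() call, so the many file downloads share one setup.
    """
    session = requests.Session()
    session.headers.update({
        "Cookie": cookie_string,
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) "
                      "AppleWebKit/537.36 (KHTML, like Gecko) "
                      "Chrome/76.0.3809.100 Safari/537.36",
    })
    return session


# usage sketch: replace the placeholder with real cookies, then download as before
# session = make_cnvd_session("JSESSIONID=...; __jsl_clearance=...")
# html = session.get('https://www.cnvd.org.cn/shareData/list?max=240&offset=0').text
```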