An Example of Fetching HTML Page Resources with Python's urllib2 Module
Published: 2020-12-16 20:43:09 | Category: Python | Source: compiled from the web
First, put the network addresses you want to fetch in a separate list file, one per line:

https://www.oudahe.com/p/22508/l
https://www.oudahe.com/p/22509/l
https://www.oudahe.com/p/5599/l
https://www.oudahe.com/p/8659/l

Then let's look at how the program works. The code (Python 2; the urllib2 module exists only in Python 2) is as follows:

```python
#!/usr/bin/python
import os
import urllib2

def Cdown_data(fileurl, fpath, dpath):
    # Make sure the target directory exists, then fetch the URL
    # and write the response body to fpath.
    if not os.path.exists(dpath):
        os.makedirs(dpath)
    try:
        getfile = urllib2.urlopen(fileurl)
        data = getfile.read()
        f = open(fpath, 'w')
        f.write(data)
        f.close()
    except urllib2.URLError:
        print 'download failed:', fileurl

with open('u1.list') as lines:
    for line in lines:
        URI = line.strip()
        # Skip URLs containing a query string or percent-encoding.
        # (The original test `if '?' and '%' in URI` only checked for
        # '%', because the literal '?' is always truthy.)
        if '?' in URI or '%' in URI:
            continue
        # Exactly two slashes means a bare "scheme://host" with no path.
        elif URI.count('/') == 2:
            continue
        elif URI.count('/') > 2:
            try:
                # dirpath: the URL up to its last '/', scheme stripped.
                dirpath = URI.rpartition('/')[0].split('//')[1]
                # filepath: the whole URL with the scheme stripped.
                filepath = URI.split('//')[1]
                if filepath:
                    print URI, filepath, dirpath
                    # The original called Cdown_data(URI, dirpath),
                    # passing two arguments to a three-argument
                    # function; the file path argument was missing.
                    Cdown_data(URI, filepath, dirpath)
            except IndexError:
                print URI, 'error'
```

(Editor: 李大同)
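To make the two slicing expressions in the loop concrete, here is a small demonstration of what they produce for one sample URL from the list file. The variable names match the script; the `print` calls and expected values are mine (string slicing behaves identically in Python 2 and 3):

```python
# Demonstrating the path-derivation slicing used by the script
# on one sample URL from the list file above.
uri = 'https://www.oudahe.com/p/22508/l'

# Everything up to the last '/', with the "https:" scheme stripped.
dirpath = uri.rpartition('/')[0].split('//')[1]
print(dirpath)    # www.oudahe.com/p/22508

# The whole URL with the scheme stripped, used as the local file path.
filepath = uri.split('//')[1]
print(filepath)   # www.oudahe.com/p/22508/l

# The '/' count decides whether a line is fetched at all:
# more than 2 slashes means the URL has a path component.
print(uri.count('/'))   # 5, which is > 2, so this URL is downloaded
```

So each page is saved under a relative directory tree that mirrors the host and path of its URL.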
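The same script can be ported to Python 3, where `urllib2.urlopen` became `urllib.request.urlopen`. The sketch below is my adaptation, not the author's code; the helper names (`should_skip`, `derive_paths`, `cdown_data`) and the narrowed exception handling are my assumptions:

```python
#!/usr/bin/env python3
# A Python 3 port sketch of the downloader above. In Python 3,
# urllib2's urlopen lives in urllib.request. The helper function
# names here are assumptions, not part of the original script.
import os
import urllib.request

def should_skip(uri):
    # Same filter as the original loop: skip query strings,
    # percent-encoding, and bare "scheme://host" lines
    # (exactly two slashes means no path component).
    return '?' in uri or '%' in uri or uri.count('/') == 2

def derive_paths(uri):
    # filepath: the whole URL with the scheme stripped;
    # dirpath: the URL up to its last '/', scheme stripped.
    filepath = uri.split('//')[1]
    dirpath = uri.rpartition('/')[0].split('//')[1]
    return filepath, dirpath

def cdown_data(fileurl, fpath, dpath):
    # exist_ok avoids the race in "check, then makedirs".
    os.makedirs(dpath, exist_ok=True)
    try:
        data = urllib.request.urlopen(fileurl).read()
        with open(fpath, 'wb') as f:  # bytes: no encoding assumption
            f.write(data)
    except OSError as e:  # URLError subclasses OSError in Python 3
        print(fileurl, 'error:', e)

if __name__ == '__main__' and os.path.exists('u1.list'):
    with open('u1.list') as lines:
        for line in lines:
            uri = line.strip()
            if not uri or should_skip(uri) or uri.count('/') <= 2:
                continue
            filepath, dirpath = derive_paths(uri)
            print(uri, filepath, dirpath)
            cdown_data(uri, filepath, dirpath)
```

Writing the response out in binary mode (`'wb'`) sidesteps the encoding questions that Python 3 text mode would raise for arbitrary fetched pages.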