
URL works fine in a browser or wget, but comes back empty from Python or cURL

Published: 2020-12-16 21:43:38 | Category: Python | Source: compiled from the web

When I try to open http://www.comicbookdb.com/browse.php from Python (it works fine in my browser), I get an empty response:

>>> import urllib.request
>>> content = urllib.request.urlopen('http://www.comicbookdb.com/browse.php')
>>> print(content.read())
b''

The same thing happens when I set a User-Agent:

>>> opener = urllib.request.build_opener()
>>> opener.addheaders = [('User-agent','Mozilla/5.0')]
>>> content = opener.open('http://www.comicbookdb.com/browse.php')
>>> print(content.read())
b''

Or when I use httplib2:

>>> import httplib2
>>> h = httplib2.Http('.cache')
>>> response,content = h.request('http://www.comicbookdb.com/browse.php')
>>> print(content)
b''
>>> print(response)
{'cache-control': 'no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'content-location': 'http://www.comicbookdb.com/browse.php', 'expires': 'Thu, 19 Nov 1981 08:52:00 GMT', 'content-length': '0', 'set-cookie': 'PHPSESSID=590f5997a91712b7134c2cb3291304a8; path=/', 'date': 'Wed, 25 Dec 2013 15:12:30 GMT', 'server': 'Apache', 'pragma': 'no-cache', 'content-type': 'text/html', 'status': '200'}
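The httplib2 response above already contains the key clue: a 200 status together with `content-length: 0`. With plain urllib you can confirm the same thing by inspecting the response headers instead of only the body (a diagnostic sketch; `probe` is my own helper name):

```python
import urllib.request

def probe(url):
    """Return (HTTP status, Content-Length header, actual body length) for a GET."""
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
        return resp.status, resp.getheader('Content-Length'), len(body)

# Given the output shown above, probing the failing URL would yield
# a "successful" yet empty response, e.g. (200, '0', 0):
# probe('http://www.comicbookdb.com/browse.php')
```
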

Or when I try to download it with cURL:

C:\>curl -v http://www.comicbookdb.com/browse.php
* About to connect() to www.comicbookdb.com port 80
*   Trying 208.76.81.137... * connected
* Connected to www.comicbookdb.com (208.76.81.137) port 80
> GET /browse.php HTTP/1.1
User-Agent: curl/7.13.1 (i586-pc-mingw32msvc) libcurl/7.13.1 zlib/1.2.2
Host: www.comicbookdb.com
Pragma: no-cache
Accept: */*

< HTTP/1.1 200 OK
< Date: Wed, 25 Dec 2013 15:20:06 GMT
< Server: Apache
< Expires: Thu, 19 Nov 1981 08:52:00 GMT
< Cache-Control: no-store, pre-check=0
< Pragma: no-cache
< Set-Cookie: PHPSESSID=0a46f2d390639da7eb223ad47380b394; path=/
< Content-Length: 0
< Content-Type: text/html
* Connection #0 to host www.comicbookdb.com left intact
* Closing connection #0

Opening the URL in a browser or downloading it with Wget works fine, however:

C:\>wget http://www.comicbookdb.com/browse.php
--16:16:26--  http://www.comicbookdb.com/browse.php
           => `browse.php'
Resolving www.comicbookdb.com... 208.76.81.137
Connecting to www.comicbookdb.com[208.76.81.137]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]

    [    <=>                              ] 40,687        48.75K/s

16:16:27 (48.75 KB/s) - `browse.php' saved [40687]

As does downloading a different file from the same server:

>>> content = urllib.request.urlopen('http://www.comicbookdb.com/index.php')
>>> print(content.read(100))
b'

So why doesn't the other URL work?

Accepted answer
It seems the server requires a `Connection: keep-alive` header, which curl (and, presumably, the other failing clients) does not send by default.

With curl, you can use this command, which shows a non-empty response:

curl -v -H 'Connection: keep-alive' http://www.comicbookdb.com/browse.php

With Python, you can use the following code:

import httplib2
h = httplib2.Http('.cache')
response, content = h.request('http://www.comicbookdb.com/browse.php', headers={'Connection': 'keep-alive'})
print(content)
print(response)
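If you want to stay within the standard library, note that urllib.request is not an option here: in CPython, `AbstractHTTPHandler.do_open` overwrites the `Connection` header with `close` before sending, so the keep-alive header cannot be supplied that way. `http.client` gives you direct control over the headers instead. A minimal sketch (`fetch_keepalive` is a hypothetical helper name, not part of any library):

```python
import http.client

def fetch_keepalive(host, path):
    """GET a path while explicitly sending Connection: keep-alive.

    Hypothetical helper: uses http.client because urllib.request
    forcibly replaces the Connection header with "close".
    """
    conn = http.client.HTTPConnection(host, timeout=10)
    try:
        conn.request('GET', path, headers={'Connection': 'keep-alive'})
        resp = conn.getresponse()
        return resp.status, resp.read()
    finally:
        conn.close()

# status, body = fetch_keepalive('www.comicbookdb.com', '/browse.php')
```
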
