Hi everyone, I'm 倔强青铜三. Welcome to follow me on my WeChat public account: 倔强青铜三. Like, save, follow, the one-click triple combo!
Can you imagine scraping data from the web without installing a single third-party library? While beginners flock to the requests library, the nuclear weapon built into Python gets overlooked!
Today we'll open up a new world of network programming with urllib.request: zero third-party dependencies, just vanilla Python!
urllib.request is the HTTP Swiss Army knife of the Python standard library. No extra installation needed, just import and go. It can handle sending GET and POST requests, customizing request headers, and dealing with errors and timeouts.
Fire off your first request with urlopen():
import urllib.request

# Fetch the content of a demo site
with urllib.request.urlopen("https://httpbin.org/get") as response:
    status = response.status
    headers = dict(response.getheaders())
    content = response.read().decode("utf-8")

print(f"Status code: {status}")
print(f"Content type: {headers.get('Content-Type')}")
print(f"Content snippet: {content[:200]}...")
Output:
Status code: 200
Content type: application/json
Content snippet: {
"args": {},
"headers": {
"Accept-Encoding": "identity",
"Host": "httpbin.org",
"User-Agent": "Python-urllib/3.10",
...
},
"origin": "xxx.xxx.xxx.xxx",
"url": "https://httpbin.org/get"...
Many sites block Python's default UA (User-Agent). Break through the blockade with a Request object:
import urllib.request

# Masquerade as a Chrome browser
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/91.0.4472.124"}
req = urllib.request.Request(
    url="https://httpbin.org/user-agent",
    headers=headers
)

# Send the request
with urllib.request.urlopen(req) as response:
    print(response.read().decode("utf-8"))
Output:
{
"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/91.0.4472.124"
}
The site sees the disguised browser identity!
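If you send many requests, rebuilding the headers dict every time gets tedious. A sketch using build_opener, which attaches its default headers to every request it sends (the UA string is just the example from above):

```python
import urllib.request

# An opener applies its addheaders to every request it sends
opener = urllib.request.build_opener()
opener.addheaders = [
    ("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/91.0.4472.124")
]

# opener.open("https://httpbin.org/user-agent") now sends the custom UA;
# urllib.request.install_opener(opener) would make it the global default
```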
To implement logins and similar operations, you must master POST:
import urllib.request
import urllib.parse

# Build the form data
form_data = {"user": "admin", "pass": "secret123"}
encoded_data = urllib.parse.urlencode(form_data).encode("utf-8")

# Create the POST request
req = urllib.request.Request(
    url="https://httpbin.org/post",
    data=encoded_data,
    method="POST"
)

# Send it and read the result
with urllib.request.urlopen(req) as response:
    print(response.read().decode("utf-8"))
Output:
{
"form": {
"pass": "secret123",
"user": "admin"
},
"headers": {
"Content-Length": "25",
"Content-Type": "application/x-www-form-urlencoded"
},
...
}
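Form encoding isn't the only option. Many APIs expect a JSON body instead; here is a sketch (same httpbin endpoint) that serializes the payload with the json module and sets the Content-Type header to match:

```python
import json
import urllib.request

# Serialize the payload to JSON bytes
payload = json.dumps({"user": "admin", "active": True}).encode("utf-8")

req = urllib.request.Request(
    url="https://httpbin.org/post",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Sending works exactly as before:
# with urllib.request.urlopen(req) as response:
#     print(response.read().decode("utf-8"))
```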
Network requests are full of uncertainty, so you must add exception handling:
import urllib.error
import urllib.request

targets = [
    "https://不存在的网站.com",  # bad domain
    "https://httpbin.org/status/404",  # 404 page
]

for url in targets:
    try:
        with urllib.request.urlopen(url) as response:
            print(f"{url} succeeded")
    # HTTPError is a subclass of URLError, so catch it first
    except urllib.error.HTTPError as e:
        print(f"{url} server error: {e.code} {e.msg}")
    except urllib.error.URLError as e:
        print(f"{url} failed: {e.reason}")
Output:
https://不存在的网站.com failed: [Errno 11001] getaddrinfo failed
https://httpbin.org/status/404 server error: 404 NOT FOUND
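Transient failures (a DNS hiccup, a dropped connection) often succeed on a second attempt. A minimal retry helper, a hypothetical sketch built on the exceptions above:

```python
import time
import urllib.error
import urllib.request

def fetch_with_retry(url, retries=3, delay=1.0):
    """Fetch url, retrying on URLError with exponential backoff (hypothetical helper)."""
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                return response.read()
        except urllib.error.URLError:
            if attempt == retries - 1:
                raise  # out of attempts, re-raise the last error
            time.sleep(delay * (2 ** attempt))
```

Note that HTTPError is also a URLError, so 4xx/5xx responses get retried too; check e.code in the except clause if you only want to retry network-level failures.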
Putting all of this knowledge together:
import urllib.error
import urllib.request

url = "http://example.com"
file_path = "example.html"

try:
    # Set a browser User-Agent header
    req = urllib.request.Request(
        url,
        headers={"User-Agent": "Mozilla/5.0"}
    )
    # 10-second timeout
    with urllib.request.urlopen(req, timeout=10) as response:
        # Detect the content encoding
        content_type = response.headers.get("Content-Type", "")
        encoding = "utf-8"
        if "charset=" in content_type:
            encoding = content_type.split("charset=")[-1].split(";")[0].strip()
        html = response.read().decode(encoding)
    # Write to file
    with open(file_path, "w", encoding=encoding) as f:
        f.write(html)
    print(f"Page saved to {file_path}")
except Exception as e:
    print(f"Fetch failed: {e}")
Output:
Page saved to example.html
Open example.html to see the full Example home page.
Common pitfalls:
Garbled text: check the charset in response.headers.get("Content-Type") and decode with that encoding.
Timeout crashes: set a timeout at call time: urlopen(url, timeout=10).
HTTPS certificate errors: in a test environment you can temporarily skip verification:
import ssl
import urllib.request
# For testing only: this disables certificate verification
context = ssl._create_unverified_context()
response = urllib.request.urlopen(url, context=context)
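The unverified context above disables all certificate checks. When the real problem is a missing or private CA, a better fix is a verifying context pointed at the right CA bundle (the file path below is hypothetical):

```python
import ssl
import urllib.request

# A default context verifies certificates and hostnames
context = ssl.create_default_context()

# If your server uses a private CA, load its bundle instead of disabling checks:
# context.load_verify_locations(cafile="/path/to/ca-bundle.crt")

# urllib.request.urlopen(url, context=context) then verifies as usual
```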
Always remember: never skip certificate verification in production code.
Once you've mastered these techniques, handling network requests with the pure Python standard library will be smooth sailing!