Tutorial: using fixed long-term proxy IPs with the Python Scrapy framework
Date: 2023-01-27  Source: 枫之叶网络
Recently, 枫之叶网络's support team helped customers who use Python's Scrapy framework with long-term proxy IPs for large-scale data collection. Below is the Scrapy demo the editor has put together:
import base64

# Proxy server
proxyServer = "http://www.pachongdaili.com:65535"
# Proxy tunnel credentials
proxyUser = "pachongdaili"
proxyPass = "pachongdaili"

# For Python 2 (str is bytes, so b64encode accepts it directly):
# proxyAuth = "Basic " + base64.b64encode(proxyUser + ":" + proxyPass)
# For Python 3: b64encode requires bytes, and Basic auth uses the standard
# base64 alphabet, so encode the credentials first and decode the result
proxyAuth = "Basic " + base64.b64encode(
    (proxyUser + ":" + proxyPass).encode("ascii")
).decode("utf8")

class ProxyMiddleware(object):
    def process_request(self, request, spider):
        request.meta["proxy"] = proxyServer
        # For Scrapy 2.6.2 compatibility: mark the proxy as already
        # authorized so Scrapy does not strip the Proxy-Authorization header
        request.meta["_auth_proxy"] = proxyServer
        request.headers["Proxy-Authorization"] = proxyAuth
        request.headers["Connection"] = "close"
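For the middleware above to take effect, it must also be registered in the project's settings.py. A minimal sketch follows; the module path "myproject.middlewares" is a placeholder and should be adjusted to wherever ProxyMiddleware lives in your project, and the priority 350 is just a common choice that places it before Scrapy's built-in HttpProxyMiddleware (priority 750):

```python
# settings.py -- register the proxy middleware so Scrapy calls its
# process_request hook for every outgoing request.
# "myproject.middlewares.ProxyMiddleware" is a placeholder path.
DOWNLOADER_MIDDLEWARES = {
    "myproject.middlewares.ProxyMiddleware": 350,
}
```

You can confirm the Basic auth header the middleware would send by decoding it back: base64-decoding the part after "Basic " should recover the original "user:password" string.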
Original URL: http://www.pachongdaili.com/support/a466.html  Support QQ: 475685360