
You can do this with a custom retry middleware; you only need to override the process_response method of the current RetryMiddleware:
from scrapy.downloadermiddlewares.retry import RetryMiddleware
from scrapy.utils.response import response_status_message


class CustomRetryMiddleware(RetryMiddleware):

    def process_response(self, request, response, spider):
        if request.meta.get('dont_retry', False):
            return response
        # standard behaviour: retry on the configured HTTP error codes
        if response.status in self.retry_http_codes:
            reason = response_status_message(response.status)
            return self._retry(request, reason, spider) or response

        # this is your check: retry 200 responses that match the spider's retry_xpath
        if response.status == 200 and response.xpath(spider.retry_xpath):
            reason = 'response got xpath "{}"'.format(spider.retry_xpath)
            return self._retry(request, reason, spider) or response
        return response

Then enable it in settings.py in place of the default RetryMiddleware:
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': None,
    'myproject.middlewarefilepath.CustomRetryMiddleware': 550,
}

Now you have a middleware in which you can configure the XPath to retry on, using the retry_xpath attribute inside your Spider:
class MySpider(Spider):
    name = "myspidername"

    # example XPath that marks a response worth retrying
    retry_xpath = '//h2[@class="tags"]'
    ...
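The inherited _retry() helper still enforces Scrapy's normal retry limits, so retries triggered by the XPath check are capped just like HTTP-error retries. A minimal sketch of the related settings, in case the defaults are not what you want (values here are illustrative, not from the answer above):

# settings.py (illustrative values)
RETRY_TIMES = 3                                # extra attempts allowed per request
RETRY_HTTP_CODES = [500, 502, 503, 504, 408]   # becomes self.retry_http_codes in the middleware

Individual requests can still opt out with request.meta['dont_retry'] = True, which the middleware checks first.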
This won't retry exactly when an item field comes back empty, but you can specify the same path as that field in the retry_xpath attribute to make it work.
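If the goal is specifically to retry when an expected field is missing, a small variant (an assumed adaptation, not part of the answer above) is to negate the check, retrying 200 responses where retry_xpath does not match:

from scrapy.downloadermiddlewares.retry import RetryMiddleware


class RetryOnMissingXpathMiddleware(RetryMiddleware):
    # Assumed variant of CustomRetryMiddleware: retry 200 responses where
    # spider.retry_xpath does NOT match (e.g. the expected field is absent).
    def process_response(self, request, response, spider):
        if (response.status == 200
                and not request.meta.get('dont_retry', False)
                and not response.xpath(spider.retry_xpath)):
            reason = 'response missing xpath "{}"'.format(spider.retry_xpath)
            return self._retry(request, reason, spider) or response
        # everything else falls back to the stock RetryMiddleware behaviour
        return super(RetryOnMissingXpathMiddleware, self).process_response(request, response, spider)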