
Yes and no[1]. If you fetch a pdf, it will be stored in memory, but as long as the pdfs are not big enough to fill up your available memory, that is fine.
You could save the pdf in the spider callback:
def parse_listing(self, response):
    # ... extract pdf urls
    for url in pdf_urls:
        yield Request(url, callback=self.save_pdf)

def save_pdf(self, response):
    path = self.get_path(response.url)
    with open(path, "wb") as f:
        f.write(response.body)
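The `get_path` helper above is left undefined in the answer. A minimal sketch of what it might do, assuming you simply want to name the file after the last segment of the url (the `download_dir` default is illustrative):

```python
import os
from urllib.parse import urlparse

def get_path(url, download_dir="downloads"):
    # Hypothetical helper: derive a local file path from the pdf url.
    name = os.path.basename(urlparse(url).path) or "index.pdf"
    return os.path.join(download_dir, name)
```

In practice you may also want to sanitize the name or hash the url to avoid collisions.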
If you choose to do it in a pipeline:
# in the spider
def parse_pdf(self, response):
    i = MyItem()
    i['body'] = response.body
    i['url'] = response.url
    # you can add more metadata to the item
    return i

# in your pipeline
def process_item(self, item, spider):
    path = self.get_path(item['url'])
    with open(path, "wb") as f:
        f.write(item['body'])
    # remove body and add path as reference
    del item['body']
    item['path'] = path
    # let item be processed by other pipelines. ie. db store
    return item
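As an aside, Scrapy also ships with a built-in FilesPipeline that implements this store-and-reference pattern for you. A minimal configuration sketch (the item class name and the storage path are illustrative):

```python
# settings.py -- enable the built-in files pipeline and pick a storage dir
ITEM_PIPELINES = {"scrapy.pipelines.files.FilesPipeline": 1}
FILES_STORE = "/path/to/store"

# items.py -- FilesPipeline expects these two fields on the item
import scrapy

class PdfItem(scrapy.Item):
    file_urls = scrapy.Field()  # urls the pipeline should download
    files = scrapy.Field()      # filled in with paths/checksums after download
```

With this in place the spider only yields items carrying `file_urls`, and the pipeline writes the files under `FILES_STORE`.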
[1] Another approach could be to store only the pdf's url and use another process to fetch the document without buffering it into memory (e.g. wget).
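That footnote approach can be sketched as a small wrapper that shells out to wget, so the document streams straight to disk instead of passing through the spider's memory. `build_wget_cmd` and `fetch_pdf` are hypothetical names; wget itself must be installed:

```python
import subprocess

def build_wget_cmd(url, dest):
    # -q: quiet output, -O: write the response body to dest
    return ["wget", "-q", "-O", dest, url]

def fetch_pdf(url, dest):
    # Delegate the download to wget; the file never enters this
    # process's memory, which is the point of the footnote.
    subprocess.run(build_wget_cmd(url, dest), check=True)
```

You would call `fetch_pdf` from a separate worker fed with the urls your spider stored.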