
To fetch multiple URLs in parallel while limiting the number of concurrent connections to 20:
import urllib2
from multiprocessing.dummy import Pool

def generate_urls():
    # generate some dummy urls
    for i in range(100):
        yield 'http://example.com?param=%d' % i

def get_url(url):
    try:
        return url, urllib2.urlopen(url).read(), None
    except EnvironmentError as e:
        return url, None, e

pool = Pool(20)  # limit number of concurrent connections
for url, result, error in pool.imap_unordered(get_url, generate_urls()):
    if error is None:
        print result,
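The code above is Python 2 (urllib2 and the print statement no longer exist in Python 3). A rough Python 3 equivalent of the same pattern uses concurrent.futures.ThreadPoolExecutor, where max_workers caps the number of concurrent connections just like Pool(20) does. This is a sketch only; the fetch here is simulated with a placeholder string instead of a real urllib.request.urlopen call, so it runs without network access:

```python
from concurrent.futures import ThreadPoolExecutor

def generate_urls():
    # dummy URLs, mirroring the Python 2 example above
    for i in range(100):
        yield 'http://example.com?param=%d' % i

def get_url(url):
    # placeholder for urllib.request.urlopen(url).read();
    # returns (url, result, error) like the original get_url
    try:
        result = 'body-of-%s' % url  # simulated response body
        return url, result, None
    except EnvironmentError as e:
        return url, None, e

# max_workers=20 limits concurrency, like Pool(20) in the original
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(get_url, generate_urls()))

for url, result, error in results:
    if error is None:
        print(result)
```

Note that pool.map preserves input order; if you want results as they complete (the behavior of imap_unordered), submit each URL with pool.submit and iterate with concurrent.futures.as_completed instead.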