Python 3.7 + Scrapy: async SyntaxError

Running scrapy bench under Python 3.7 (32-bit, Windows 10) fails with a SyntaxError while Scrapy imports twisted.conch.manhole:
C:\Users\kay.lee>scrapy bench
2018-07-31 14:40:51 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: scrapybot)
2018-07-31 14:40:51 [scrapy.utils.log] INFO: Versions: lxml 4.2.3.0, libxml2 2.9.7, cssselect 1.0.3, parsel 1.5.0, w3lib 1.19.0, Twisted 18.7.0, Python 3.7.0 (v3.7.0:1bf9cc5093, Jun 27 2018, 04:06:47) [MSC v.1914 32 bit (Intel)], pyOpenSSL 18.0.0 (OpenSSL 1.1.0h 27 Mar 2018), cryptography 2.3, Platform Windows-10-10.0.17134-SP0
2018-07-31 14:40:54 [scrapy.crawler] INFO: Overridden settings: {'CLOSESPIDER_TIMEOUT': 10, 'LOGSTATS_INTERVAL': 1, 'LOG_LEVEL': 'INFO'}
Traceback (most recent call last):
File "c:\users\kay.lee\appdata\local\programs\python\python37-32\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\users\kay.lee\appdata\local\programs\python\python37-32\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\kay.lee\AppData\Local\Programs\Python\Python37-32\Scripts\scrapy.exe\__main__.py", line 9, in <module>
File "c:\users\kay.lee\appdata\local\programs\python\python37-32\lib\site-packages\scrapy\cmdline.py", line 150, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "c:\users\kay.lee\appdata\local\programs\python\python37-32\lib\site-packages\scrapy\cmdline.py", line 90, in _run_print_help
func(*a, **kw)
File "c:\users\kay.lee\appdata\local\programs\python\python37-32\lib\site-packages\scrapy\cmdline.py", line 157, in _run_command
cmd.run(args, opts)
File "c:\users\kay.lee\appdata\local\programs\python\python37-32\lib\site-packages\scrapy\commands\bench.py", line 25, in run
self.crawler_process.crawl(_BenchSpider, total=100000)
File "c:\users\kay.lee\appdata\local\programs\python\python37-32\lib\site-packages\scrapy\crawler.py", line 170, in crawl
crawler = self.create_crawler(crawler_or_spidercls)
File "c:\users\kay.lee\appdata\local\programs\python\python37-32\lib\site-packages\scrapy\crawler.py", line 198, in create_crawler
return self._create_crawler(crawler_or_spidercls)
File "c:\users\kay.lee\appdata\local\programs\python\python37-32\lib\site-packages\scrapy\crawler.py", line 203, in _create_crawler
return Crawler(spidercls, self.settings)
File "c:\users\kay.lee\appdata\local\programs\python\python37-32\lib\site-packages\scrapy\crawler.py", line 55, in __init__
self.extensions = ExtensionManager.from_crawler(self)
File "c:\users\kay.lee\appdata\local\programs\python\python37-32\lib\site-packages\scrapy\middleware.py", line 58, in from_crawler
return cls.from_settings(crawler.settings, crawler)
File "c:\users\kay.lee\appdata\local\programs\python\python37-32\lib\site-packages\scrapy\middleware.py", line 34, in from_settings
mwcls = load_object(clspath)
File "c:\users\kay.lee\appdata\local\programs\python\python37-32\lib\site-packages\scrapy\utils\misc.py", line 44, in load_object
mod = import_module(module)
File "c:\users\kay.lee\appdata\local\programs\python\python37-32\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "c:\users\kay.lee\appdata\local\programs\python\python37-32\lib\site-packages\scrapy\extensions\telnet.py", line 12, in <module>
from twisted.conch import manhole, telnet
File "c:\users\kay.lee\appdata\local\programs\python\python37-32\lib\site-packages\twisted\conch\manhole.py", line 154
def write(self, data, async=False):
^
SyntaxError: invalid syntax
Solution:

Edit:
c:\users\kay.lee\appdata\local\programs\python\python37-32\lib\site-packages\twisted\conch\manhole.py

The last frame of the traceback shows that the failure comes from the line def write(self, data, async=False): in manhole.py. In Python 3.7, async and await were promoted from soft keywords to full reserved keywords, so async can no longer be used as a parameter or variable name; the short sketch below illustrates the change.
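A minimal sketch (not from the original post) reproducing the problem: on Python 3.7 compiling the same definition raises the identical SyntaxError, while Python 3.5/3.6 still accept it (with a DeprecationWarning):

import sys

src = "def write(self, data, async=False): pass"  # the line at manhole.py:154
try:
    compile(src, "<manhole.py excerpt>", "exec")
    print("compiled: async is still only a soft keyword on this interpreter")
except SyntaxError as err:
    # Python 3.7+ reserves async/await, reproducing the traceback above.
    print(f"Python {sys.version_info.major}.{sys.version_info.minor}: "
          f"SyntaxError: {err.msg}")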
Search manhole.py for every occurrence of the identifier async and rename all of them to some other, non-reserved name, e.g. shark, so that line 154 becomes:

def write(self, data, shark=False):

A small helper script for doing the rename safely follows below.
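As a convenience, here is a hedged sketch of that edit done programmatically rather than by hand. The path is the one from this machine's traceback (adjust it for your own install), and a .bak backup is kept before the file is rewritten:

import re
import shutil
from pathlib import Path

# Path taken from the traceback on this machine; adjust for your install.
path = Path(r"c:\users\kay.lee\appdata\local\programs\python\python37-32"
            r"\lib\site-packages\twisted\conch\manhole.py")

shutil.copy(path, path.with_suffix(".bak"))  # keep a backup (manhole.bak)
text = path.read_text(encoding="utf-8")

# Whole-word match, so words like "asynchronous" in comments are untouched.
count = len(re.findall(r"\basync\b", text))
path.write_text(re.sub(r"\basync\b", "shark", text), encoding="utf-8")
print(f"renamed {count} occurrence(s) of 'async' in {path.name}")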
Running the benchmark again now succeeds:
C:\Users\kay.lee>scrapy bench
2018-07-31 14:56:56 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: scrapybot)
2018-07-31 14:56:56 [scrapy.utils.log] INFO: Versions: lxml 4.2.3.0, libxml2 2.9.7, cssselect 1.0.3, parsel 1.5.0, w3lib 1.19.0, Twisted 18.7.0, Python 3.7.0 (v3.7.0:1bf9cc5093, Jun 27 2018, 04:06:47) [MSC v.1914 32 bit (Intel)], pyOpenSSL 18.0.0 (OpenSSL 1.1.0h 27 Mar 2018), cryptography 2.3, Platform Windows-10-10.0.17134-SP0
2018-07-31 14:56:59 [scrapy.crawler] INFO: Overridden settings: {'CLOSESPIDER_TIMEOUT': 10, 'LOGSTATS_INTERVAL': 1, 'LOG_LEVEL': 'INFO'}
2018-07-31 14:56:59 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.closespider.CloseSpider',
'scrapy.extensions.logstats.LogStats']
2018-07-31 14:57:00 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-07-31 14:57:00 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-07-31 14:57:00 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-07-31 14:57:00 [scrapy.core.engine] INFO: Spider opened
2018-07-31 14:57:00 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-07-31 14:57:01 [scrapy.extensions.logstats] INFO: Crawled 45 pages (at 2700 pages/min), scraped 0 items (at 0 items/min)
2018-07-31 14:57:02 [scrapy.extensions.logstats] INFO: Crawled 93 pages (at 2880 pages/min), scraped 0 items (at 0 items/min)
2018-07-31 14:57:03 [scrapy.extensions.logstats] INFO: Crawled 133 pages (at 2400 pages/min), scraped 0 items (at 0 items/min)
2018-07-31 14:57:04 [scrapy.extensions.logstats] INFO: Crawled 173 pages (at 2400 pages/min), scraped 0 items (at 0 items/min)
2018-07-31 14:57:05 [scrapy.extensions.logstats] INFO: Crawled 213 pages (at 2400 pages/min), scraped 0 items (at 0 items/min)
2018-07-31 14:57:06 [scrapy.extensions.logstats] INFO: Crawled 245 pages (at 1920 pages/min), scraped 0 items (at 0 items/min)
2018-07-31 14:57:07 [scrapy.extensions.logstats] INFO: Crawled 285 pages (at 2400 pages/min), scraped 0 items (at 0 items/min)
2018-07-31 14:57:08 [scrapy.extensions.logstats] INFO: Crawled 317 pages (at 1920 pages/min), scraped 0 items (at 0 items/min)
2018-07-31 14:57:09 [scrapy.extensions.logstats] INFO: Crawled 349 pages (at 1920 pages/min), scraped 0 items (at 0 items/min)
2018-07-31 14:57:10 [scrapy.core.engine] INFO: Closing spider (closespider_timeout)
2018-07-31 14:57:10 [scrapy.extensions.logstats] INFO: Crawled 381 pages (at 1920 pages/min), scraped 0 items (at 0 items/min)
2018-07-31 14:57:11 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 157484,
'downloader/request_count': 397,
'downloader/request_method_count/GET': 397,
'downloader/response_bytes': 1025739,
'downloader/response_count': 397,
'downloader/response_status_count/200': 397,
'finish_reason': 'closespider_timeout',
'finish_time': datetime.datetime(2018, 7, 31, 6, 57, 11, 688454),
'log_count/INFO': 17,
'request_depth_max': 14,
'response_received_count': 397,
'scheduler/dequeued': 397,
'scheduler/dequeued/memory': 397,
'scheduler/enqueued': 7940,
'scheduler/enqueued/memory': 7940,
'start_time': datetime.datetime(2018, 7, 31, 6, 57, 0, 834947)}
2018-07-31 14:57:11 [scrapy.core.engine] INFO: Spider closed (closespider_timeout)