The parameters that you told me to use in your software do NOT exist.
So this is what I did, under "Crawler Limitations, Finetune (optional)":
Maximum pages: 0 ("0" for unlimited)
Maximum depth level: 0 ("0" for unlimited)
Maximum execution time, seconds: 0 ("0" for unlimited)
Save the script state, every X seconds: 20
This option allows the crawl to be resumed if it was interrupted ("0" for no saves).
Make a delay between requests, X seconds after each N requests: 0
This option allows you to reduce the load on your web server ("0" for no delay). A rough sketch of what these last two options amount to follows below.
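For what it is worth, here is a rough Python sketch of what I understand those last two options to do. Your generator is a PHP script, so none of these names come from your code; everything below is my own hypothetical illustration:

```python
import pickle
import time

# Hypothetical illustration of the "save state" and "delay" options above;
# none of these names are taken from the actual generator script.
SAVE_STATE_EVERY = 20   # "Save the script state, every X seconds": 20
DELAY_SECONDS = 0       # "Make a delay between requests": 0 disables the delay
DELAY_EVERY_N = 1       # apply the delay after each N requests

def crawl(queue, fetch, state_path="crawl_state.pkl"):
    """Drain the URL queue, checkpointing periodically so the run can resume."""
    last_save = time.time()
    done = 0
    while queue:
        url = queue.pop(0)
        fetch(url)  # download and parse one page
        done += 1

        # Politeness delay after every N requests ("0" disables it).
        if DELAY_SECONDS and done % DELAY_EVERY_N == 0:
            time.sleep(DELAY_SECONDS)

        # Periodic checkpoint so an interrupted crawl can be resumed
        # ("0" would disable saving entirely).
        if SAVE_STATE_EVERY and time.time() - last_save >= SAVE_STATE_EVERY:
            with open(state_path, "wb") as f:
                pickle.dump({"queue": queue, "done": done}, f)
            last_save = time.time()
```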
And on the Crawling page, this is what I did:
Run in background: checked
Do not interrupt the script even after closing the browser window, until the crawling is complete.
Resume last session: checked
Continue the interrupted session (2009-05-30 19:53:31, URLs added: 70088, estimated URLs left: 1).
Below that, the page says "Click button below to start crawl manually:", so I clicked it.
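In the same hypothetical terms, I take "Resume last session" to mean reloading the saved queue from the last checkpoint instead of starting over; a minimal sketch, reusing the crawl() function from above:

```python
import os
import pickle

def start_crawl(start_url, fetch, state_path="crawl_state.pkl", resume=True):
    # "Resume last session": pick up the saved queue from the last checkpoint
    # if one exists, otherwise begin a fresh crawl from the start URL.
    if resume and os.path.exists(state_path):
        with open(state_path, "rb") as f:
            state = pickle.load(f)
        queue = state["queue"]  # roughly the "estimated URLs left" shown above
    else:
        queue = [start_url]
    crawl(queue, fetch, state_path)  # crawl() as sketched earlier
```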
Please respond. Thank you.