Thank you, Oleg, for your answer.
I resumed the crawling process, but it got stuck once again!
Here's the report from the crawling page:
-------
Links depth: 3
Current page: Austria_Immacolata_Tirolo_Innsbruck_Hotel_Bon_Alpina_Igls_offerte_FID8_AID11_CID73_RID117.html
Pages added to sitemap: 903
Pages scanned: 904 (38,473.0 KB)
Pages left: 618 (+ 5 queued for the next depth level)
Time passed: 0:03:30
Time left: 0:02:23
Memory usage: 2,284.6 Kb
Resuming the last session (last updated: 2012-11-20 14:52:21)
-------
Even though resuming worked, I cannot restart the crawling process by hand every time it stops, since I need to set up a cron job to run it daily.
In the previous installation there was a sort of utility that re-ran the script after a while, I suppose to prevent script time-outs: is it possible to activate it?
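In the meantime, as a workaround I could probably wrap the crawler in a small script launched from cron that keeps re-starting it until the report says the crawl is finished. A minimal sketch of what I have in mind (the crawl URL, the completion check, and the retry limits are just placeholders for my installation, not something taken from the generator's documentation):

```python
#!/usr/bin/env python3
"""Sketch of a cron-driven wrapper that re-launches the crawler until it
reports completion, so a run that stops (e.g. on a script time-out) is
resumed automatically instead of by hand.

Assumptions: the crawl can be (re)started by requesting CRAWL_URL, and the
returned progress page contains "Pages left: 0" (or "Completed") when the
sitemap is finished. Adjust both to match the actual installation.
"""

import time
import urllib.request

CRAWL_URL = "http://www.example.com/generator/runcrawl.php"  # hypothetical path
CHECK_INTERVAL = 60   # seconds to wait before resuming an interrupted run
MAX_ATTEMPTS = 120    # give up after this many resume attempts


def crawl_once() -> str:
    """Start (or resume) the crawl and return the progress page as text."""
    with urllib.request.urlopen(CRAWL_URL, timeout=300) as resp:
        return resp.read().decode("utf-8", errors="replace")


def main() -> None:
    for attempt in range(1, MAX_ATTEMPTS + 1):
        report = crawl_once()
        if "Pages left: 0" in report or "Completed" in report:
            print(f"Crawl finished after {attempt} run(s).")
            return
        # The run stopped before finishing: wait a bit, then resume
        # the interrupted session with another request.
        time.sleep(CHECK_INTERVAL)
    print("Giving up: crawl still unfinished after the retry budget.")


if __name__ == "__main__":
    main()
```

A single crontab entry could then run this wrapper once a day, but I would still prefer to use the built-in re-run utility if it can simply be switched on.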