Thanks for your reply.
The problem with crawling the site like a normal visitor is that you miss any page
that cannot be reached through clickable links. On our site, your script finds
9 of our 8,000+ pages.
Our site navigation is built on JavaScript drop-down menus
and a search function. It's easy for visitors to use, but creating clickable
links to all 8,000+ pages would itself require a sitemap. In other words,
I can't create a sitemap with your script until I already have a sitemap
for it to crawl.
Can you point me to the part of your sales pages that explains this limitation?
If so, fair enough: that's my fault for not reading carefully, and I'll eat the $20.
If not, how about a refund? That would also be an agreeable resolution.
For your information, I wrote a Perl script in about an hour that reads our directory
structure and builds a list of URLs. It's not a real XML sitemap yet; that will
take a bit longer. The script finds all 8,000 files and produces the list
in about one to two seconds. You might consider adding this option for the users
reporting crawling problems; a stripped-down sketch of the approach follows below.
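
In outline it's just a recursive directory walk that maps file paths onto URLs.
Something along these lines (simplified; the document root, base URL, and file
extensions here are placeholders, not our actual setup):

#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

# Placeholders -- substitute your own document root and site URL.
my $docroot = '/var/www/html';
my $baseurl = 'http://www.example.com';

my @urls;
find(sub {
    return unless -f && /\.html?$/i;     # keep only HTML pages
    my $path = $File::Find::name;
    $path =~ s/^\Q$docroot\E//;          # drop the filesystem prefix
    push @urls, "$baseurl$path";
}, $docroot);

print "$_\n" for sort @urls;             # one URL per line

Wrapping each URL in <url><loc>...</loc></url> tags to get a proper XML sitemap
is only a few more print statements on top of that.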
Anyway, if you agree to issue a refund, I'll consider this a learning
experience and wish you well with your project.
Your reply, or refund, would be appreciated. Thanks.