That makes sense. However, the contents of my "do not parse" list are identical to the contents of my excluded URLs list, so unparsed URLs should not be appearing in the final sitemap.
I think I've found the problem, though. I had set the crawl depth to 4, and that last level really flies through. Now I've set my crawl depth to 5, with the following result:
51 new URLs were found, and these now show the title problem described above, while the URLs that previously had the title problem are fine.
So the problem seems to come from the way the final crawl level is handled: URLs discovered at the deepest level look like they get recorded without being parsed.
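For illustration only: I don't know the tool's internals, but here is a minimal Python sketch of how a depth-limited crawler could produce exactly this symptom, if URLs found at the deepest configured level are added to the sitemap without ever being fetched and parsed. All names and logic below are hypothetical, not the tool's actual code:

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class PageParser(HTMLParser):
    """Collects the <title> text and outgoing links from one page."""

    def __init__(self):
        super().__init__()
        self.title = None
        self.links = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title = (self.title or "") + data


def crawl(start_url, max_depth):
    """Breadth-first crawl that stops descending at max_depth."""
    seen = {start_url}
    queue = deque([(start_url, 0)])
    sitemap = {}  # url -> title (or None if never parsed)
    while queue:
        url, depth = queue.popleft()
        if depth >= max_depth:
            # URLs discovered at the deepest level are recorded in the
            # sitemap but never fetched, so no <title> is extracted --
            # which would match the symptom described above.
            sitemap[url] = None
            continue
        try:
            html = urlopen(url).read().decode("utf-8", errors="replace")
        except OSError:
            continue
        parser = PageParser()
        parser.feed(html)
        sitemap[url] = parser.title
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append((absolute, depth + 1))
    return sitemap
```

If the real crawler behaves anything like this, then with a depth of 4 everything discovered at level 4 would get an empty title, and raising the depth to 5 would fix those but create a fresh batch of title-less URLs at level 5, which is exactly what I'm seeing.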
This experiment also means I need to rethink a couple of settings, not least of which is the depth of some of the pages!!