
FS4SP Crawling Will Not Stop

Many of us have encountered the scenario where we try to stop a crawl in our FAST Connector Search Service Application (SSA) and the status changes to “Stopping” but the crawl never actually stops.

If you have not, you probably will at some point. I thought it might be useful to post an explanation for anyone who has wondered what happened to the “brakes” on the crawler.
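
For reference, the stop can be issued (and the status watched) from PowerShell rather than the Search Administration page. Here is a minimal sketch; the SSA name “FAST Content SSA” and the content source name “Test Content” are assumptions you would replace with your own:

```powershell
# Run in the SharePoint 2010 Management Shell on an application server.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Assumed names -- substitute your own Content SSA and content source.
$ssa = Get-SPEnterpriseSearchServiceApplication "FAST Content SSA"
$cs  = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa -Identity "Test Content"

$cs.StopCrawl()      # the same request the "Stop Crawl" link in the UI sends
$cs.CrawlStatus      # reports CrawlStopping until the crawler actually lets go
```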

My first experience with this phenomenon was on a deployment with crawlers spread across three application servers, and a FAST Search Server 2010 for SharePoint (FS4SP) farm running multiple document processors and two content distributors. Everything was working wonderfully.

Then our developers installed a new processing pipeline extension that they needed to test. To reduce the number of log files to review during testing, they paused all but one document processor and started a crawl of a small test content source. No problem: the crawl proceeded fine, albeit a bit slowly.
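
For what it is worth, one way to take document processors offline like this is the nctrl tool in the FS4SP shell on each processing server (stopping them outright rather than pausing them; the procserver_N names below come from nctrl status on my farm and will differ on yours):

```powershell
# Run in the "Microsoft FAST Search Server 2010 for SharePoint" shell
# on each FS4SP server that hosts document processors.
nctrl status              # document processors appear as procserver_N

# Stop all but one document processor (example names; check your own nctrl status output).
nctrl stop procserver_2
nctrl stop procserver_3
nctrl stop procserver_4
```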

The real problem was inter-team communication. The testing team did not tell anyone else what they had done, so another team decided to perform a full crawl of a larger content source to document how long it would take. A bad decision under the circumstances.

The crawlers began crawling content at full speed and almost immediately began to spin their wheels, so to speak. Performance counters showed almost no successes and a large number of open items. Phone calls revealed the pipeline testing in progress. Oops!
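
If you want to watch this from the crawler side, the gatherer performance counters on the crawl servers tell the story. A sketch using Get-Counter; I am assuming the SharePoint 2010 “OSS Search Gatherer” counter set here, so enumerate first and sample whichever counters your build actually exposes:

```powershell
# Enumerate the gatherer counter sets to see what this build exposes.
Get-Counter -ListSet "OSS Search Gatherer*" | Select-Object -ExpandProperty Counter

# Then sample the interesting ones every few seconds. The counter name below is
# an assumption; pick a real one from the enumeration above.
Get-Counter -Counter "\OSS Search Gatherer\Heartbeats" -SampleInterval 5 -MaxSamples 12
```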

The crawls were stopped. Or, at least, stops were attempted. On the management page the status of every crawl was “Stopping.” The next day the crawls were still “Stopping.” That is when I got the phone call.
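
The stuck state is easy to see for every content source at once, by the way (same assumed SSA name as in the earlier sketch):

```powershell
# List every content source in the Content SSA with its current crawl status.
$ssa = Get-SPEnterpriseSearchServiceApplication "FAST Content SSA"
Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa |
    Select-Object Name, CrawlStatus, CrawlStarted
```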

After hearing what had happened, I had all the paused document processors started. Within a few minutes, every crawl stopped at almost exactly the same time. So what happened?

It turns out that crawlers hold crawled content in memory until it can be handed off to the content distributors on the FS4SP farm. Normally this happens quickly: after logging the event in the crawl log, the crawler can proceed, or stop, if those are its instructions. But since the crawlers have no way to persist to the file system the content they worked so hard to retrieve, they will not shut down (peacefully) while the content distributor refuses to accept that content. The content distributors likewise have no persistent storage, and they will not accept content if no document processor is available to take it.
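
So when a stop hangs like this, the place to look is the far end of the chain, not the crawler. From the FS4SP admin server, a quick sanity check might look like the sketch below (treating psctrl status as a farm-wide view of the document processors is my assumption here; nctrl status is per server):

```powershell
# On the FS4SP admin server: confirm a content distributor and at least one
# document processor are running. If every procserver_N is down, the content
# distributor will not accept content and the crawlers cannot let go of theirs.
nctrl status | findstr /i "contentdistributor procserver"

# Farm-wide view of the document processors.
psctrl status
```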

Even a shutdown of the OSearch14 service takes a while to force the crawlers to dump their content. And be warned: such a shutdown will probably force a full crawl, since the crawl log may be unusable after this action.
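
For completeness, the hammer looks like this, run in an elevated shell on the crawl server; treat it as a last resort for exactly the reason above:

```powershell
# Forcibly stop the SharePoint Server Search 14 service so the crawlers give up
# their in-memory content. Expect to need a full crawl afterwards.
Stop-Service OSearch14 -Force
Start-Service OSearch14
```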

In our case, once the additional document processors were started, the backlog of crawled content cleared and the crawlers gracefully accepted the “stop” instruction and shut down.
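
On our farm the fix was one nctrl command per stopped process on each processing server, mirroring the earlier sketch:

```powershell
# Back in the FS4SP shell on each processing server:
nctrl start procserver_2
nctrl start procserver_3
nctrl start procserver_4

nctrl status    # confirm the document processors are running again
```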

Just some notes to keep in mind when testing or troubleshooting the crawling, processing, and indexing pipelines of the FS4SP components. As usual, I have focused more on understanding the behavior than on the precise technical terms and steps of the processes.
