Why are you crawling my site?
Yahoo! Slurp is Yahoo!'s web-indexing robot. The Yahoo! Slurp crawler collects documents from the Web to build a searchable index for search services that use the Yahoo! search engine. These documents are discovered and crawled because other web pages contain links pointing to them.

As part of the crawling effort, the Yahoo! Slurp crawler honors the robots.txt standard to ensure it does not crawl or index content from pages you do not want included in Yahoo! Search Technology. If robots.txt disallows crawling of a page, Yahoo! will not read or use the contents of that page. The URL of such a protected page may still appear in Yahoo! Search Technology as a "thin" document with no text content: links and reference text from other public web pages provide identifiable information about a URL and may be indexed as part of web search coverage.
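As a sketch of how this works on the crawler side, the example below uses Python's standard-library `urllib.robotparser` to evaluate robots.txt rules the way a crawler such as Slurp would. The rules and URLs here are illustrative, not Yahoo!'s actual configuration; a real robots.txt file lives at the root of your site (e.g. `http://example.com/robots.txt`).

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt rules: block the Slurp user-agent
# from everything under /private/.
robots_txt = """\
User-agent: Slurp
Disallow: /private/
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

# Public pages may be crawled; anything under /private/ must be skipped.
print(parser.can_fetch("Slurp", "http://example.com/public.html"))     # True
print(parser.can_fetch("Slurp", "http://example.com/private/a.html"))  # False
```

A disallowed page is simply never fetched, which is why only its URL (and anchor text from other sites) can appear in the index as a "thin" document.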