How to simulate crawling by search engine robots
You can choose how our web crawler (search engine bot) identifies itself when crawling your website:
- Standard browser – the default and recommended option. Your website loads the same way your regular visitors see it.
- YandexBot – crawls your website as the Yandex search bot sees it. Our crawler identifies itself as the main Yandex bot (YandexBot/3.0).
- Googlebot – crawls your website as the Google search bot sees it. Our crawler identifies itself as Google's web search bot (Googlebot/2.1).
- Baiduspider – crawls your website as the Baidu web search bot sees it.
- Mysitemapgenerator – direct identification of our crawler; use this option if you need separate control settings and the ability to manage our bot's access to your website.
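Under the hood, each option simply changes the User-Agent header the crawler sends. A minimal sketch in Python, assuming the full Yandex and Google User-Agent strings (the article only names the YandexBot/3.0 and Googlebot/2.1 tokens; the surrounding text is the publicly documented format, used here as an assumption):

```python
import urllib.request

# Illustrative mapping from crawl option to User-Agent header.
# Only the YandexBot/3.0 and Googlebot/2.1 tokens come from the article;
# the full strings are assumed from the bots' public documentation.
USER_AGENTS = {
    "YandexBot": "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)",
    "Googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
}

def build_request(option, url):
    """Build an HTTP request identified as the chosen bot."""
    return urllib.request.Request(url, headers={"User-Agent": USER_AGENTS[option]})

req = build_request("Googlebot", "https://example.com/")
print(req.get_header("User-agent"))  # Mozilla/5.0 (compatible; Googlebot/2.1; ...)
```

Your server and its robots.txt rules then see the crawl exactly as they would see a visit from that bot.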
- When you choose the YandexBot, Googlebot, Baiduspider, or Mysitemapgenerator option, only the robots.txt instructions for that particular robot are applied (User-agent: Yandex, User-agent: Googlebot, User-agent: Baiduspider, or User-agent: Mysitemapgenerator, respectively). The general User-agent: * section is used only when the "personal" section is missing.
- If you are using the Standard browser or Mysitemapgenerator option, the crawler considers only the User-agent: Mysitemapgenerator section or, if it is missing, the general User-agent: * section. "Personal" sections of other bots (User-agent: Yandex, User-agent: Googlebot, and so on) are not considered.