Simulating how search engine robots process your website

You can choose how our web crawler (search engine bot) identifies itself while indexing your website:
  • Standard browser — the default and recommended option. The crawler loads your website the same way your regular visitors see it.
  • YandexBot — indexes your website as the Yandex search robot sees it. Our crawler identifies itself as the main Yandex indexing robot (YandexBot/3.0).
  • Googlebot — indexes your website as the Google search robot sees it. Our crawler identifies itself as Google's web-search robot (Googlebot/2.1).
  • Mysitemapgenerator — direct identification of our robot. Use this option if you need separate crawl settings and the ability to manage website access for our crawler specifically.
Note how the robots.txt file is processed under each identification option:
  • With the "YandexBot", "Googlebot" or "Mysitemapgenerator" options, only the instructions addressed to that particular robot are considered (User-agent: Yandex, User-agent: Googlebot, or User-agent: Mysitemapgenerator respectively). The general User-agent: * section is used only when a "personal" section is missing.
  • With the "Standard browser" or "Mysitemapgenerator" options, the crawler considers only the User-agent: Mysitemapgenerator section or the general User-agent: * section. "Personal" sections such as User-agent: Yandex or User-agent: Googlebot are not considered.
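This "personal section first, general section as fallback" behavior is standard robots.txt matching. As an illustration, a minimal sketch using Python's standard `urllib.robotparser` (the robots.txt content and the robot names below are hypothetical examples, not our service's actual configuration):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt with a general section and one "personal" section.
robots_txt = """
User-agent: *
Disallow: /private/

User-agent: Googlebot
Disallow: /no-google/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A crawler identified as Googlebot uses only its own section,
# so the general Disallow: /private/ rule does not apply to it.
print(parser.can_fetch("Googlebot", "/private/"))    # True
print(parser.can_fetch("Googlebot", "/no-google/"))  # False

# A robot with no "personal" section falls back to User-agent: *.
print(parser.can_fetch("SomeBot", "/private/"))      # False
```

Note that once a robot has its own User-agent section, the general rules are ignored entirely rather than merged with it.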