Googlebot is the web crawler software used by Google, which collects documents from the web to build a searchable index for the Google Search engine. The name refers to two different types of web crawler: a desktop crawler (simulating a desktop user) and a mobile crawler (simulating a mobile user).
A website will probably be crawled by both Googlebot Desktop and Googlebot Mobile. However, Google announced that, starting from September 2020, all sites were switched to mobile-first indexing, meaning Google crawls the web primarily using a smartphone Googlebot. The subtype of Googlebot can be identified from the user agent string in the request. However, both crawler types obey the same product token (user agent token) in robots.txt, so a developer cannot selectively target either Googlebot Mobile or Googlebot Desktop using robots.txt.
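Because both crawlers share the "Googlebot" product token, telling them apart must happen at the user agent level, for example in server-side request handling. A minimal sketch of such a check follows; the classification heuristic and the example user agent strings are illustrative (based on published Googlebot user agent formats), not an official Google API:

```python
# Heuristic sketch: classify a request by its user agent string.
# Assumption: the smartphone crawler's user agent advertises an Android
# device and a "Mobile" token, while the desktop crawler's does not.

def classify_googlebot(user_agent: str) -> str:
    """Return 'mobile', 'desktop', or 'other' for a given user agent."""
    if "Googlebot" not in user_agent:
        return "other"
    if "Android" in user_agent and "Mobile" in user_agent:
        return "mobile"
    return "desktop"

# Example user agent strings (illustrative, version numbers elided to X).
desktop_ua = ("Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; "
              "compatible; Googlebot/2.1; +http://www.google.com/bot.html) "
              "Chrome/X Safari/537.36")
mobile_ua = ("Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
             "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/X "
             "Mobile Safari/537.36 (compatible; Googlebot/2.1; "
             "+http://www.google.com/bot.html)")

print(classify_googlebot(desktop_ua))  # desktop
print(classify_googlebot(mobile_ua))   # mobile
```

Note that a robots.txt rule addressed to `User-agent: Googlebot` applies to both crawlers alike, which is why the distinction above can only be made per request.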
If a webmaster wishes to restrict the information on their site available to Googlebot, or another well-behaved spider, they can do so with the appropriate directives in a robots.txt file, or by adding the meta tag <meta name="Googlebot" content="nofollow" /> to the web page. Googlebot requests to web servers are identifiable by a user agent string containing "Googlebot" and a host address containing "googlebot.com".
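Since the user agent string alone can be spoofed, one common way to confirm that a request really comes from Googlebot is to reverse-resolve the requesting IP, check that the host name falls under googlebot.com, and then forward-resolve that name to confirm it maps back to the same address. A sketch under those assumptions (the helper names are illustrative, and the DNS lookups require network access):

```python
import socket

def hostname_is_googlebot(hostname: str) -> bool:
    """Pure string check: does the host name fall under googlebot.com?"""
    return hostname == "googlebot.com" or hostname.endswith(".googlebot.com")

def verify_googlebot_ip(ip: str) -> bool:
    """Reverse-resolve the IP, check the domain, then forward-resolve the
    name to confirm it maps back to the same address. Needs network access."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)  # reverse DNS lookup
        if not hostname_is_googlebot(hostname):
            return False
        # Forward-confirm: the host name must resolve back to this IP.
        return ip in socket.gethostbyname_ex(hostname)[2]
    except (socket.herror, socket.gaierror):
        return False

print(hostname_is_googlebot("crawl-66-249-66-1.googlebot.com"))  # True
print(hostname_is_googlebot("googlebot.com.evil.example"))       # False
```

The forward-confirmation step matters: without it, an attacker who controls the reverse DNS of their own IP range could claim any host name.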
A problem that webmasters with low-bandwidth web hosting plans have often noted with Googlebot is that it takes up an enormous amount of bandwidth. This can cause websites to exceed their bandwidth limit and be taken down temporarily. This is especially troublesome for mirror sites which host many gigabytes of data. Google provides Search Console, which allows website owners to throttle the crawl rate.
How often Googlebot will crawl a site depends on the crawl budget. Crawl budget is an estimation of how often a website is updated. Internally, Googlebot's development team (the Crawling and Indexing team) uses several defined terms to capture what "crawl budget" stands for. Since May 2019, Googlebot uses the latest Chromium rendering engine, which supports ECMAScript 6 features. This makes the bot more "evergreen" and ensures that it is not relying on a rendering engine that is outdated compared to browser capabilities.
Mediabot is the web crawler that Google uses for analysing page content so that Google AdSense can serve contextually relevant advertising on a web page. Mediabot identifies itself with the user agent string "Mediapartners-Google/2.1".
Unlike other crawlers, Mediabot does not follow links to discover new crawlable URLs, instead visiting only URLs that have included the AdSense code. Where that content resides behind a login, the crawler can be given a login so that it is able to crawl protected content.
- "Googlebot". Google. 2019-03-11. Retrieved 2019-03-11.
- "Announcing mobile first indexing for the whole web". Google Developers. Retrieved 2021-03-17.
- "Google Search Console". Google.com.
- "Google Search Console". search.google.com. Retrieved 2019-03-11.
- "The new evergreen Googlebot". Official Google Webmaster Central Blog. Retrieved 2019-06-07.
- "Google - Webmasters". Retrieved 2012-12-15.
- "What Crawl Budget Means for Googlebot". Official Google Webmaster Central Blog. Retrieved 2018-07-04.
- "The new evergreen Googlebot". Official Google Webmaster Central Blog. Retrieved 2019-06-17.
- "About the AdSense Crawler".
- "Display ads on login-protected pages".