Robots exclusion standard
The robots exclusion standard, also known as the robots exclusion protocol or robots.txt protocol, is a standard used by websites to communicate with web crawlers and other web robots. The standard specifies the instruction format used to inform the robot about which areas of the website should not be processed or scanned. Robots are often used by search engines to categorize and archive websites, or by webmasters to proofread source code. Not all robots cooperate with the standard: email harvesters, spambots, and malware robots that scan for security vulnerabilities may ignore it. The standard is different from, but can be used in conjunction with, Sitemaps, a robot inclusion standard for websites.
The standard was proposed by Martijn Koster, when working for Nexor, in February 1994 on the www-talk mailing list, the main communication channel for WWW-related activities at the time. Charles Stross claims to have provoked Koster to suggest robots.txt after he wrote a badly behaved web crawler that caused an inadvertent denial-of-service attack on Koster's server.
It quickly became a de facto standard that present and future web crawlers were expected to follow; most complied, including those operated by search engines such as WebCrawler, Lycos and AltaVista.
About the standard
When a site owner wishes to give instructions to web robots, they place a text file called robots.txt in the root of the web site hierarchy (e.g. https://www.example.com/robots.txt). This text file contains the instructions in a specific format (see examples below). Robots that choose to follow the instructions try to fetch this file and read the instructions before fetching any other file from the web site. If this file does not exist, web robots assume that the website owner wishes to provide no specific instructions, and crawl the entire site.
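Robots written in Python can use the standard library's robots.txt parser for this step. The following is a minimal sketch, with a placeholder URL and a placeholder user-agent name, of fetching and consulting a site's robots.txt before requesting other pages:

# Minimal sketch using Python's standard-library parser (urllib.robotparser);
# the URLs and the "MyCrawler" user-agent are placeholders, not real values.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")  # robots.txt sits at the site root
rp.read()                                          # fetch and parse the file

# Ask whether a given user-agent may fetch a given URL before crawling it
print(rp.can_fetch("MyCrawler", "https://www.example.com/private/page.html"))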
A robots.txt file on a website will function as a request that specified robots ignore specified files or directories when crawling a site. This might be, for example, out of a preference for privacy from search engine results, or the belief that the content of the selected directories might be misleading or irrelevant to the categorization of the site as a whole, or out of a desire that an application only operate on certain data. Links to pages listed in robots.txt can still appear in search results if they are linked to from a page that is crawled.
A robots.txt file covers one origin. For websites with multiple subdomains, each subdomain must have its own robots.txt file. If example.com had a robots.txt file but a.example.com did not, the rules that would apply for example.com would not apply to a.example.com. In addition, each protocol and port needs its own robots.txt file; http://example.com/robots.txt does not apply to pages under https://example.com:8080/ or https://example.com/.
Despite the use of the terms "allow" and "disallow", the protocol is purely advisory. It relies on the cooperation of the web robot, so that marking an area of a site out of bounds with robots.txt does not guarantee exclusion of all web robots. In particular, malicious web robots are unlikely to honor robots.txt; some may even use the robots.txt as a guide and go straight to the disallowed URLs.
While it is possible to prevent access to directories by anybody, including web robots, by configuring the server's security properly, listing directories in Disallow directives discloses their existence to everyone who reads the robots.txt file.
There is no official standards body or RFC for the robots.txt protocol. It was created by consensus in June 1994 by members of the robots mailing list. The information specifying the parts that should not be accessed is placed in a file called robots.txt in the top-level directory of the website. The robots.txt patterns are matched by simple substring comparisons, so care should be taken to make sure that patterns matching directories have the final '/' character appended; otherwise all files with names starting with that substring will match, rather than just those in the directory intended. For example, Disallow: /admin would also block a file named /administrator.html, whereas Disallow: /admin/ blocks only the contents of that directory.
Many robots also pass a special user-agent to the web server when fetching content. A web administrator could also configure the server to automatically return failure (or pass alternative content) when it detects a connection using one of the robots.
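As an illustration of such server-side filtering, the following is a minimal Python sketch, not a production configuration, that refuses requests whose User-Agent header contains a placeholder bot name:

# Minimal sketch of refusing requests from a particular user-agent string;
# "BadBot" is a placeholder, and a real deployment would filter at the web server.
from http.server import BaseHTTPRequestHandler, HTTPServer

class FilteringHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        user_agent = self.headers.get("User-Agent", "")
        if "BadBot" in user_agent:
            self.send_error(403)  # return failure to the identified robot
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Regular content\n")

if __name__ == "__main__":
    HTTPServer(("", 8000), FilteringHandler).serve_forever()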
This example tells all robots that they can visit all files because the wildcard * specifies all robots:
User-agent: *
Disallow:
The same result can be accomplished with an empty or missing robots.txt file.
This example tells all robots to stay out of a website:
User-agent: *
Disallow: /
This example tells all robots not to enter three directories:
User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /junk/
This example tells all robots to stay away from one specific file:
User-agent: *
Disallow: /directory/file.html
Note that all other files in the specified directory will be processed.
This example tells a specific robot to stay out of a website:
User-agent: BadBot # replace 'BadBot' with the actual user-agent of the bot
Disallow: /
This example tells two specific robots not to enter one specific directory:
User-agent: BadBot # replace 'BadBot' with the actual user-agent of the bot
User-agent: Googlebot
Disallow: /private/
Example demonstrating how comments can be used:
# Comments appear after the "#" symbol at the start of a line, or after a directive
User-agent: * # match all bots
Disallow: / # keep them out
It is also possible to list multiple robots with their own rules. The actual robot string is defined by the crawler. A few crawler operators, such as Google, support several user-agent strings that allow the webmaster to deny access to a subset of their services by targeting specific user-agent strings.
Example demonstrating multiple user-agents:
User-agent: googlebot # all Google services
Disallow: /private/ # disallow this directory

User-agent: googlebot-news # only the news service
Disallow: / # disallow everything

User-agent: * # any robot
Disallow: /something/ # disallow this directory
Several major crawlers support a Crawl-delay parameter, set to the number of seconds to wait between successive requests to the same server, for example a ten-second delay:
User-agent: *
Crawl-delay: 10
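A crawler that honors the parameter might read it with Python's standard-library parser; the sketch below uses placeholder URLs and a placeholder user-agent name (crawl_delay() is available from Python 3.6):

# Sketch of honoring Crawl-delay between successive requests to the same server;
# the URLs, paths and the "MyCrawler" name are placeholders.
import time
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()

delay = rp.crawl_delay("MyCrawler") or 0  # None when no Crawl-delay is given
for path in ["/a.html", "/b.html"]:
    url = "https://www.example.com" + path
    if rp.can_fetch("MyCrawler", url):
        pass               # the actual fetch of url would go here
        time.sleep(delay)  # wait before the next request to the same server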
Some major crawlers support an Allow directive, which can counteract a following Disallow directive. This is useful when one tells robots to avoid an entire directory but still wants some HTML documents in that directory crawled and indexed. While by standard implementation the first matching robots.txt pattern always wins, Google's implementation differs in that Allow patterns with equal or more characters in the directive path win over a matching Disallow pattern. Bing uses the Allow or Disallow directive, whichever is more specific based on length, like Google.
In order to be compatible with all robots, if one wants to allow single files inside an otherwise disallowed directory, it is necessary to place the Allow directive(s) first, followed by the Disallow, for example:
Allow: /directory1/myfile.html
Disallow: /directory1/
This example will Disallow anything in /directory1/ except /directory1/myfile.html, since the latter will match first. The order is only important to robots that follow the standard; in the case of the Google or Bing bots, the order is not important.
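The length-based precedence described above can be sketched in a few lines of Python; this is an illustration of the rule, not any crawler's actual implementation, and the rule list mirrors the example above:

# Sketch of longest-match precedence (Google/Bing style): the rule with the
# longest matching path prefix wins; on a tie, Allow is preferred here.
def is_allowed(rules, path):
    best = None  # (prefix_length, directive)
    for directive, prefix in rules:
        if path.startswith(prefix):
            length = len(prefix)
            if best is None or length > best[0] or (length == best[0] and directive == "Allow"):
                best = (length, directive)
    return best is None or best[1] == "Allow"  # no matching rule means allowed

rules = [("Allow", "/directory1/myfile.html"), ("Disallow", "/directory1/")]
print(is_allowed(rules, "/directory1/myfile.html"))  # True: the Allow rule is longer
print(is_allowed(rules, "/directory1/other.html"))   # False: only the Disallow rule matches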
Some crawlers support a Sitemap directive, allowing multiple Sitemaps to be listed in the same robots.txt file:
Sitemap: http://www.gstatic.com/s2/sitemaps/profiles-sitemap.xml
Sitemap: http://www.google.com/hostednews/sitemap_index.xml
Some crawlers (Yandex, Google) support a Host directive, allowing websites with multiple mirrors to specify their preferred domain.
Note: This is not supported by all crawlers and, if used, it should be inserted at the bottom of the robots.txt file.
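A minimal illustration of the directive, with example.com standing in for the preferred mirror:
Host: example.com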
Universal "*" match
The Robot Exclusion Standard does not mention anything about the "*" character in the Disallow: statement. Some crawlers like Googlebot recognize strings containing "*", while MSNbot and Teoma interpret it in different ways.
In addition to root-level robots.txt files, robots exclusion directives can be applied at a more granular level through the use of Robots meta tags and X-Robots-Tag HTTP headers. The robots meta tag cannot be used for non-HTML files such as images, text files, or PDF documents. On the other hand, the X-Robots-Tag can be added to non-HTML files by using .htaccess and httpd.conf files.
A "noindex" meta tag:
<meta name="robots" content="noindex" />
A "noindex" HTTP response header:
The X-Robots-Tag is only effective after the page has been requested and the server responds, and the robots meta tag is only effective after the page has loaded, whereas robots.txt is effective before the page is requested. Thus if a page is excluded by a robots.txt file, any robots meta tags or X-Robots-Tag headers are effectively ignored because the robot will not see them in the first place. Even if a robot honors robots.txt, it is still possible for the robot to find and index a disallowed URL from other places on the web. This can be prevented by using robots.txt directives in combination with robots meta tags or X-Robots-Tag headers.
- Automated Content Access Protocol - a failed proposal to extend robots.txt
- BotSeer - now inactive search engine for robots.txt files
- Distributed web crawling
- Focused crawler
- Internet Archive
- Library of Congress Digital Library project
- National Digital Information Infrastructure and Preservation Program
- Spider trap
- Web archiving
- Web crawler
- Meta Elements for Search Engines
- Koster, Martijn. "Martijn Koster".
- Fielding, Roy (1994). "Maintaining Distributed Hypertext Infostructures: Welcome to MOMspider's Web" (PostScript). First International Conference on the World Wide Web. Geneva. Retrieved September 25, 2013.
- "The Web Robots Pages". Robotstxt.org. 1994-06-30. Retrieved 2013-12-29.
- Koster, Martijn (25 February 1994). "Important: Spiders, Robots and Web Wanderers" (Hypermail archived message). www-talk mailing list. Retrieved October 25, 2013.
- "How I got here in the end, part five: "things can only get better!"". Charlie's Diary. 19 June 2006. Retrieved 19 April 2014.
- "Uncrawled URLs in search results". YouTube. Oct 5, 2009. Retrieved 2013-12-29.
- "About Ask.com: Webmasters". Retrieved 16 February 2013.
- "About AOL Search". Retrieved 16 February 2013.
- "Baiduspider". Retrieved 16 February 2013.
- "Robots Exclusion Protocol - joining together to provide better documentation". Retrieved 16 February 2013.
- "Google Developers - Robots.txt Specifications". Retrieved 16 February 2013.
- "Submitting your website to Yahoo! Search". Retrieved 16 February 2013.
- "Using robots.txt". Retrieved 16 February 2013.
- "List of User-Agents (Spiders, Robots, Browser)". User-agents.org. Retrieved 2013-12-29.
- "Access Control - Apache HTTP Server". Httpd.apache.org. Retrieved 2013-12-29.
- "Deny Strings for Filtering Rules : The Official Microsoft IIS Site". Iis.net. 2013-11-06. Retrieved 2013-12-29.
- Rick DeJarnette (10 August 2009). "Crawl delay and the Bing crawler, MSNBot". Retrieved 16 February 2013.
- "Webmaster Help Center - How do I block Googlebot?". Retrieved 2007-11-20.
- "How do I prevent my site or certain subdirectories from being crawled? - Yahoo Search Help". Retrieved 2007-11-20.
- "Google's Hidden Interpretation of Robots.txt". Retrieved 2010-11-15.
- "Yahoo! Search Blog - Webmasters can now auto-discover with Sitemaps". Retrieved 2009-03-23.
- "Yandex - Using robots.txt". Retrieved 2013-05-13.
- "Search engines and dynamic content issues". MSNbot issues with robots.txt. Retrieved 2007-04-01.
- "Robots meta tag and X-Robots-Tag HTTP header specifications - Webmasters — Google Developers".
- "Block or remove pages using a robots.txt file". Google. Retrieved 16 March 2014.