URL normalization is the process by which URLs are modified and standardized in a consistent manner. The goal of normalization is to transform a URL into a normalized form so that it is possible to determine whether two syntactically different URLs are equivalent.
Search engines employ URL normalization to match pages to relevant search terms and to reduce the indexing of duplicate pages. Web crawlers perform URL normalization in order to avoid crawling the same resource more than once. Web browsers may perform normalization to determine if a link has been visited or to determine if a page has been cached.
Several types of normalization may be performed. Some always preserve semantics; others may not.
Normalizations that preserve semantics
- Converting the scheme and host to lower case. The scheme and host components of the URL are case-insensitive. Most normalizers will convert them to lowercase. Example: HTTP://www.Example.com/ → http://www.example.com/
- Capitalizing letters in escape sequences. All letters within a percent-encoding triplet (e.g., "%3A") are case-insensitive, and should be capitalized. Example: http://www.example.com/a%c2%b1b → http://www.example.com/a%C2%B1b
- Decoding percent-encoded octets of unreserved characters. For consistency, percent-encoded octets in the ranges of ALPHA (%41–%5A and %61–%7A), DIGIT (%30–%39), hyphen (%2D), period (%2E), underscore (%5F), or tilde (%7E) should not be created by URI producers and, when found in a URI, should be decoded to their corresponding unreserved characters by URI normalizers. Example: http://www.example.com/%7Eusername/ → http://www.example.com/~username/
- Removing the default port. The default port (port 80 for the “http” scheme) may be removed from (or added to) a URL. Example: http://www.example.com:80/bar.html → http://www.example.com/bar.html
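The semantics-preserving steps above can be sketched together in Python using only the standard library. This is a minimal illustration, not a complete RFC 3986 implementation: it normalizes the scheme, host, default port, and the path's percent-encodings, and leaves the query and fragment untouched.

```python
import re
from urllib.parse import urlsplit, urlunsplit

# Unreserved characters per RFC 3986 section 2.3.
UNRESERVED = set(
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "abcdefghijklmnopqrstuvwxyz"
    "0123456789-._~"
)

DEFAULT_PORTS = {"http": 80, "https": 443}

def _fix_percent(match):
    """Decode a %XX triplet if it encodes an unreserved character;
    otherwise just uppercase its hex digits."""
    octet = chr(int(match.group(1), 16))
    if octet in UNRESERVED:
        return octet
    return "%" + match.group(1).upper()

def normalize(url):
    parts = urlsplit(url)
    scheme = parts.scheme.lower()                      # scheme is case-insensitive
    netloc = parts.hostname.lower() if parts.hostname else ""
    # Drop the scheme's default port; keep any explicit non-default port.
    if parts.port is not None and parts.port != DEFAULT_PORTS.get(scheme):
        netloc += ":%d" % parts.port
    if parts.username:                                 # preserve userinfo, if any
        userinfo = parts.username
        if parts.password:
            userinfo += ":" + parts.password
        netloc = userinfo + "@" + netloc
    # Normalize percent-encodings in the path only, for brevity.
    path = re.sub(r"%([0-9A-Fa-f]{2})", _fix_percent, parts.path)
    return urlunsplit((scheme, netloc, path, parts.query, parts.fragment))
```

For example, `normalize("HTTP://Example.COM:80/%7Eusername/")` yields `http://example.com/~username/`, applying all four rules at once.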
Normalizations that usually preserve semantics
For http and https URLs, the following normalizations listed in RFC 3986 may result in equivalent URLs, but are not guaranteed to by the standards:
- Adding a trailing "/" to a non-empty path. Directories (folders) are indicated with a trailing slash and should be included in URLs. Example: http://www.example.com/alice → http://www.example.com/alice/
- However, there is no way to know if a URL path component represents a directory or not. RFC 3986 notes that if the former URL redirects to the latter URL, then that is an indication that they are equivalent.
- Removing dot-segments. The segments “..” and “.” can be removed from a URL according to the algorithm described in RFC 3986 (or a similar algorithm). Example: http://www.example.com/../a/b/../c/./d.html → http://www.example.com/a/c/d.html
- However, if a removed ".." component, e.g. "b/..", is a symlink to a directory with a different parent, eliding "b/.." will result in a different path and URL. In rare cases, depending on the web server, this may even be true for the root directory (e.g. "//www.example.com/.." may not be equivalent to "//www.example.com/").
Normalizations that change semantics
Applying the following normalizations results in a semantically different URL, although the URL may still refer to the same resource:
- Removing directory index. Default directory indexes are generally not needed in URLs. Examples: http://www.example.com/default.asp → http://www.example.com/ and http://www.example.com/a/index.html → http://www.example.com/a/
- Removing the fragment. The fragment component of a URL is never seen by the server and can sometimes be removed. Example: http://www.example.com/bar.html#section1 → http://www.example.com/bar.html
- However, AJAX applications frequently use the value in the fragment.
- Replacing IP with domain name. Check if the IP address maps to a domain name. Example: http://208.77.188.166/ → http://www.example.com/
- The reverse replacement is rarely safe due to virtual web servers.
- Limiting protocols. Limiting different application layer protocols. For example, the “https” scheme could be replaced with “http”. Example: https://www.example.com/ → http://www.example.com/
- Removing duplicate slashes. Paths which include two adjacent slashes could be converted to one. Example: http://www.example.com/foo//bar.html → http://www.example.com/foo/bar.html
- Removing or adding “www” as the first domain label. Some websites operate identically in two Internet domains: one whose least significant label is “www” and another whose name is the result of omitting the least significant label from the name of the first, the latter being known as a naked domain. For example, http://example.com/ and http://www.example.com/ may access the same website. Many websites redirect the user from the www to the non-www address or vice versa. A normalizer may determine if one of these URLs redirects to the other and normalize all URLs appropriately. Example: http://www.example.com/ → http://example.com/
- Sorting the query parameters. Some web pages use more than one query parameter in the URL. A normalizer can sort the parameters into alphabetical order (with their values), and reassemble the URL. Example: http://www.example.com/display?lang=en&article=fred → http://www.example.com/display?article=fred&lang=en
- However, the order of parameters in a URL may be significant (this is not defined by the standard) and a web server may allow the same variable to appear multiple times.
- Removing unused query variables. A page may only expect certain parameters to appear in the query; unused parameters can be removed. Example: http://www.example.com/display?id=123&fakefoo=fakebar → http://www.example.com/display?id=123
- Note that a parameter without a value is not necessarily an unused parameter.
- Removing default query parameters. A default value in the query string may render identically whether it is there or not. Example: http://www.example.com/display?id=&sort=ascending → http://www.example.com/display?id=
- Removing the "?" when the query is empty. When the query is empty, there may be no need for the "?". Example: http://www.example.com/display? → http://www.example.com/display
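Several of the query-string normalizations above (sorting parameters, dropping named parameters, and removing an empty "?") can be combined in a short Python sketch built on `urllib.parse`. The `drop` parameter here is an illustrative addition for known-unused or default parameter names; note that round-tripping through `parse_qsl`/`urlencode` may also re-encode values, and, as the text warns, these steps can change semantics on order-sensitive servers.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def normalize_query(url, drop=()):
    """Sort query parameters alphabetically and drop any whose names
    appear in `drop` (e.g. known-unused or default parameters).
    An empty query loses its "?" automatically in urlunsplit."""
    parts = urlsplit(url)
    params = [(k, v)
              for k, v in parse_qsl(parts.query, keep_blank_values=True)
              if k not in drop]
    query = urlencode(sorted(params))
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       query, parts.fragment))
```

For instance, `normalize_query("http://www.example.com/display?lang=en&article=fred")` returns the sorted form `http://www.example.com/display?article=fred&lang=en`, and `normalize_query("http://www.example.com/display?")` returns `http://www.example.com/display`.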
Normalization based on URL lists
Some normalization rules may be developed for specific websites by examining URL lists obtained from previous crawls or web server logs. For example, if two distinct URL forms repeatedly appear together in a crawl log and return the same content, we may assume that the two URLs are equivalent and normalize all such URLs to one of the two forms.
Schonfeld et al. (2006) present a heuristic called DustBuster for detecting DUST (different URLs with similar text) rules that can be applied to URL lists. They showed that once the correct DUST rules were found and applied with a normalization algorithm, they were able to find up to 68% of the redundant URLs in a URL list.
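DustBuster itself is considerably more sophisticated (it mines likely rules without fetching page contents), but the underlying idea of deriving rewrite rules from a URL list can be illustrated with a toy sketch. The sketch below assumes a crawl log of (URL, content-fingerprint) pairs, groups URLs that served identical content, and proposes a candidate "alpha → beta" substitution from the part where each pair of duplicate URLs differs; all names here are hypothetical, not from the paper.

```python
from collections import defaultdict
from itertools import combinations

def candidate_rules(log):
    """Toy DUST-rule miner: from (url, content_fingerprint) pairs,
    group URLs that served identical content and propose a
    substring-substitution rule from each duplicate pair."""
    groups = defaultdict(list)
    for url, fingerprint in log:
        groups[fingerprint].append(url)
    rules = set()
    for urls in groups.values():
        for a, b in combinations(sorted(set(urls)), 2):
            # Strip the longest common prefix and suffix; the remainders
            # form a candidate "alpha -> beta" rewrite rule.
            i = 0
            while i < min(len(a), len(b)) and a[i] == b[i]:
                i += 1
            j = 0
            while (j < min(len(a), len(b)) - i
                   and a[len(a) - 1 - j] == b[len(b) - 1 - j]):
                j += 1
            rules.add((a[i:len(a) - j], b[i:len(b) - j]))
    return rules
```

Fed a log where `http://example.com/` and `http://example.com/index.html` share a fingerprint, this yields the candidate rule `("", "index.html")`, i.e. the directory-index removal discussed earlier. A real system would then validate such rules (e.g. by checking redirects or sampling fetches) before normalizing with them.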
- Garg, Anubhav (2015-12-13). "What is URL Optimization in SEO?". Search Engine Stream. Retrieved 2015-12-13.
- RFC 3986, Section 6: Normalization and Comparison
- RFC 3986, Section 2.3: Unreserved Characters
- "Secure Coding in C and C++" (PDF). Securecoding.cert.org. Retrieved 2013-08-24.
- "jQuery 1.4 $.param demystified". Ben Alman. 2009-12-20. Retrieved 2013-08-24.
- RFC 3986 - Uniform Resource Identifier (URI): Generic Syntax
- Sang Ho Lee, Sung Jin Kim, and Seok Hoo Hong (2005). On URL normalization (PDF). Proceedings of the International Conference on Computational Science and its Applications (ICCSA 2005). pp. 1076–1085.
- Uri Schonfeld, Ziv Bar-Yossef, and Idit Keidar (2006). Do not crawl in the dust: different URLs with similar text. Proceedings of the 15th international conference on World Wide Web. pp. 1015–1016.
- Uri Schonfeld, Ziv Bar-Yossef, and Idit Keidar (2007). Do not crawl in the dust: different URLs with similar text. Proceedings of the 16th international conference on World Wide Web. pp. 111–120.