Regular expression
In computing, a regular expression is a pattern that provides a concise and flexible means to "match" (specify and recognize) strings of text, such as particular characters, words, or patterns of characters. Common abbreviations for "regular expression" include regex and regexp.
The concept of regular expressions was first popularized by utilities provided with Unix distributions, in particular the editor ed and the filter grep.[citation needed] A regular expression is written in a formal language that can be interpreted by a regular expression processor, which is a program that either serves as a parser generator or examines text and identifies parts that match the provided specification. Historically, the concept of regular expressions is associated with Kleene's formalism of regular sets, introduced in the 1950s.
The following are examples of specifications which can be expressed as a regular expression:
- The sequence of characters "car" appearing consecutively, such as in "car", "cartoon", or "bicarbonate"
- The word "car" when it appears as an isolated word (and delimited from other words, typically through whitespace characters)
- The word "car" when preceded by the word "motor" (and separated by a named delimiter, or multiple.)
Regular expressions are used by many text editors, utilities, and programming languages to search and manipulate text based on patterns. Some of these languages, including Perl, Ruby, AWK, and Tcl, integrate regular expressions into the syntax of the core language itself. Other programming languages, such as the .NET languages, Java, and Python, instead provide regular expressions through standard libraries. For yet other languages, such as Object Pascal (Delphi), C, and C++, non-core libraries are available (however, C++11 provides regular expressions in its standard library).
As an example of the syntax, the regular expression \bex can be used to search for all instances of the string "ex" that occur after word boundaries. Thus \bex will find the matching string "ex" in two possible locations: (1) at the beginning of words, and (2) between two characters in a string, where the first is not a word character and the second is a word character. For instance, in case (1), in the string "Texts for experts", \bex matches the "ex" in "experts" but not in "Texts" (because that "ex" occurs inside a word and not immediately after a word boundary). Similarly, in case (2), in the string "&experts", \bex matches the "ex" in "experts" because of the word boundary at '&'.
Many modern computing systems provide wildcard characters in matching filenames from a file system. This is a core capability of many command-line shells and is also known as globbing. Wildcards differ from regular expressions in generally expressing only limited forms of patterns.
History
The origins of regular expressions lie in automata theory and formal language theory, both of which are part of theoretical computer science. These fields study models of computation (automata) and ways to describe and classify formal languages. In the 1950s, mathematician Stephen Cole Kleene described these models using his mathematical notation called regular sets.[1] The SNOBOL language was an early implementation of pattern matching, but not identical to regular expressions. Ken Thompson built Kleene's notation into the editor QED as a means to match patterns in text files. He later added this capability to the Unix editor ed, which eventually led to the popular search tool grep's use of regular expressions ("grep" is a word derived from the command for regular expression searching in the ed editor, g/re/p, where re stands for regular expression[2]). Since that time, many variations of Thompson's original adaptation of regular expressions have been widely used in Unix and Unix-like utilities including expr, AWK, Emacs, vi, and lex.
Perl and Tcl regular expressions were derived from a regex library written by Henry Spencer, though Perl later expanded on Spencer's library to add many new features.[3] Philip Hazel developed PCRE (Perl Compatible Regular Expressions), which attempts to closely mimic Perl's regular expression functionality and is used by many modern tools including PHP and Apache HTTP Server. Part of the effort in the design of Perl 6 is to improve Perl's regular expression integration, and to increase their scope and capabilities to allow the definition of parsing expression grammars.[4] The result is a mini-language called Perl 6 rules, which are used to define Perl 6 grammar as well as provide a tool to programmers in the language. These rules maintain existing features of Perl 5.x regular expressions, but also allow BNF-style definition of a recursive descent parser via sub-rules.
The use of regular expressions in structured information standards for document and database modeling started in the 1960s and expanded in the 1980s when industry standards like ISO SGML (preceded by ANSI "GCA 101-1983") were consolidated. The kernel of these structure-specification language standards consists of regular expressions; their use is evident in the DTD element group syntax.
Basic concepts
A regular expression, often called a pattern, is an expression that specifies a set of strings. To specify such sets of strings, rules are often more concise than lists of a set's members. For example, the set containing the three strings "Handel", "Händel", and "Haendel" can be specified by the pattern H(ä|ae?)ndel (or, alternatively, the pattern is said to match each of the three strings). In most formalisms, if there exists at least one regular expression that matches a particular set, then there exists an infinite number of such expressions. Most formalisms provide the following operations to construct regular expressions.
- Boolean "or"
- A vertical bar separates alternatives. For example,
gray|grey
can match "gray" or "grey". - Grouping
- Parentheses are used to define the scope and precedence of the operators (among other uses). For example,
gray|grey
andgr(a|e)y
are equivalent patterns which both describe the set of "gray" or "grey". - Quantification
- A quantifier after a token (such as a character) or group specifies how often that preceding element is allowed to occur. The most common quantifiers are the question mark
?
, the asterisk*
(derived from the Kleene star), and the plus sign+
(Kleene cross).
?
The question mark indicates there is zero or one of the preceding element. For example, colou?r
matches both "color" and "colour".*
The asterisk indicates there is zero or more of the preceding element. For example, ab*c
matches "ac", "abc", "abbc", "abbbc", and so on.+
The plus sign indicates there is one or more of the preceding element. For example, ab+c
matches "abc", "abbc", "abbbc", and so on, but not "ac".
These constructions can be combined to form arbitrarily complex expressions, much like one can construct arithmetical expressions from numbers and the operations +, −, ×, and ÷. For example, H(ae?|ä)ndel and H(a|ae|ä)ndel are both valid patterns which match the same strings as the earlier example, H(ä|ae?)ndel.
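As a rough illustration, the following minimal Perl sketch (Perl supports the ?, * and + quantifiers described above with the same meaning) tests a few of the example patterns:
# Minimal sketch: the ?, * and + quantifiers in Perl-style syntax.
for my $w ("color", "colour") {
    print "colou?r matches '$w'\n" if $w =~ /colou?r/;
}
for my $w ("ac", "abc", "abbbc") {
    print "ab*c matches '$w'\n" if $w =~ /ab*c/;   # zero or more "b"
    print "ab+c matches '$w'\n" if $w =~ /ab+c/;   # one or more "b" (so "ac" is skipped)
}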
The precise syntax for regular expressions varies among tools and with context; more detail is given in the Syntax section.
Formal language theory
Regular expressions describe regular languages in formal language theory. They have the same expressive power as regular grammars.
Formal definition
Regular expressions consist of constants and operator symbols that denote sets of strings and operations over these sets, respectively. The following definition is standard, and found as such in most textbooks on formal language theory.[5][6] Given a finite alphabet Σ, the following constants are defined as regular expressions:
- (empty set) ∅ denoting the set ∅.
- (empty string) ε denoting the set containing only the "empty" string, which has no characters at all.
- (literal character) a in Σ denoting the set containing only the character a.
Given regular expressions R and S, the following operations over them are defined to produce regular expressions:
- (concatenation) RS denoting the set { αβ | α in set described by expression R and β in set described by S }. For example {"ab", "c"}{"d", "ef"} = {"abd", "abef", "cd", "cef"}.
- (alternation) R | S denoting the set union of sets described by R and S. For example, if R describes {"ab", "c"} and S describes {"ab", "d", "ef"}, expression R | S describes {"ab", "c", "d", "ef"}.
- (Kleene star) R* denoting the smallest superset of set described by R that contains ε and is closed under string concatenation. This is the set of all strings that can be made by concatenating any finite number (including zero) of strings from set described by R. For example, {"0","1"}* is the set of all finite binary strings (including the empty string), and {"ab", "c"}* = {ε, "ab", "c", "abab", "abc", "cab", "cc", "ababab", "abcab", ... }.
To avoid parentheses, it is assumed that the Kleene star has the highest priority, then concatenation, and then alternation. If there is no ambiguity, then parentheses may be omitted. For example, (ab)c can be written as abc, and a|(b(c*)) can be written as a|bc*.
Many textbooks use the symbols ∪, +, or ∨ for alternation instead of the vertical bar.
Examples (these are also exercised in the sketch below):
- a|b* denotes {ε, "a", "b", "bb", "bbb", ...}
- (a|b)* denotes the set of all strings with no symbols other than "a" and "b", including the empty string: {ε, "a", "b", "aa", "ab", "ba", "bb", "aaa", ...}
- ab*(c|ε) denotes the set of strings starting with "a", then zero or more "b"s and finally optionally a "c": {"a", "ac", "ab", "abc", "abb", "abbc", ...}
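As a bridge from the formal notation to a practical engine, here is a minimal Perl sketch (an illustrative assumption, not part of the formal definition) that tests whole-string membership by anchoring each pattern with \A and \z; (c|ε) is written as (c|), since Perl allows an empty alternative:
# Minimal sketch: testing membership in the example languages with anchored Perl patterns.
my %lang = (
    'a|b*'    => qr/\A(?:a|b*)\z/,    # {ε, "a", "b", "bb", ...}
    '(a|b)*'  => qr/\A(?:a|b)*\z/,    # every string over {a, b}
    'ab*(c|)' => qr/\Aab*(?:c|)\z/,   # "a", zero or more "b"s, optional "c"
);
for my $s ("", "a", "bb", "ab", "abbc", "ba") {
    for my $name (sort keys %lang) {
        printf "%-9s %-14s \"%s\"\n", $name,
               ($s =~ $lang{$name} ? "matches" : "does not match"), $s;
    }
}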
Expressive power and compactness
The formal definition of regular expressions is purposely parsimonious and avoids defining the redundant quantifiers ? and +, which can be expressed as follows: a+ = aa*, and a? = (a|ε). Sometimes the complement operator is added, to give a generalized regular expression; here Rc matches all strings over Σ* that do not match R. In principle, the complement operator is redundant, as it can always be expressed using the other operators. However, the process for computing such a representation is complex, and the result may require expressions whose size is doubly exponentially larger.[7][8]
Regular expressions in this sense can express the regular languages, exactly the class of languages accepted by deterministic finite automata. There is, however, a significant difference in compactness. Some classes of regular languages can only be described by deterministic finite automata whose size grows exponentially in the size of the shortest equivalent regular expressions. The standard example here is the languages Lk consisting of all strings over the alphabet {a,b} whose kth-from-last letter equals a. On one hand, a regular expression describing L4 is given by (a|b)*a(a|b)(a|b)(a|b). Generalizing this pattern to Lk gives the expression (a|b)*a(a|b)...(a|b) with k−1 trailing copies of (a|b).
On the other hand, it is known that every deterministic finite automaton accepting the language Lk must have at least 2k states. Luckily, there is a simple mapping from regular expressions to the more general nondeterministic finite automata (NFAs) that does not lead to such a blowup in size; for this reason NFAs are often used as alternative representations of regular languages. NFAs are a simple variation of the type-3 grammars of the Chomsky hierarchy.[5]
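As a rough illustration of the compactness example, a minimal Perl sketch for L4 (using [ab] as shorthand for (a|b) and the {3} counted repetition):
# Minimal sketch: L4 = strings over {a,b} whose 4th-from-last letter is "a".
my $L4 = qr/\A[ab]*a[ab]{3}\z/;      # [ab] is shorthand for (a|b)
for my $s ("babba", "abbb", "bbbb", "aab") {
    print "'$s' ", ($s =~ $L4 ? "is" : "is not"), " in L4\n";
}
# "babba" and "abbb" are in L4; "bbbb" is not; "aab" is too short.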
Finally, it is worth noting that many real-world "regular expression" engines implement features that cannot be described by the regular expressions in the sense of formal language theory; see below for more on this.
Deciding equivalence of regular expressions
As seen in many of the examples above, there is more than one way to construct a regular expression to achieve the same results.
It is possible to write an algorithm that, given two regular expressions, decides whether the described languages are equal; it reduces each expression to a minimal deterministic finite state machine and determines whether the two minimal machines are isomorphic (equivalent).
The redundancy can be eliminated by using Kleene star and set union to find an interesting subset of regular expressions that is still fully expressive, but perhaps their use can be restricted. This is a surprisingly difficult problem: as simple as regular expressions are, there is no method to systematically rewrite them to some normal form. The lack of a complete axiomatization in the past led to the star height problem. In 1991, Dexter Kozen axiomatized regular expressions with Kleene algebra.[9]
Syntax
A number of special characters or metacharacters are used to denote actions or delimit groups, but it is possible to force these special characters to be interpreted as normal characters by preceding them with a defined escape character, usually the backslash "\". For example, a dot is normally used as a "wild card" metacharacter to denote any character, but if preceded by a backslash it represents the dot character itself. The pattern c.t matches "cat", "cot", "cut", and non-words such as "czt" and "c.t", but c\.t matches only "c.t". The backslash also escapes itself, i.e., two backslashes are interpreted as a literal backslash character.
POSIX
POSIX Basic Regular Expressions
Traditional Unix regular expression syntax followed common conventions but often differed from tool to tool. The IEEE POSIX Basic Regular Expressions (BRE) standard (ISO/IEC 9945-2:1993 Information technology – Portable Operating System Interface (POSIX) – Part 2: Shell and Utilities, successively revised as ISO/IEC 9945-2:2002 Information technology – Portable Operating System Interface (POSIX) – Part 2: System Interfaces, ISO/IEC 9945-2:2003, and currently ISO/IEC/IEEE 9945:2009 Information technology – Portable Operating System Interface (POSIX®) Base Specifications, Issue 7) was designed mostly for backward compatibility with the traditional (Simple Regular Expression) syntax but provided a common standard which has since been adopted as the default syntax of many Unix regular expression tools, though there is often some variation or additional features.
BRE was released alongside an alternative flavor called Extended Regular Expressions or ERE. Many Unix tools also provide support for ERE syntax with command line arguments.
In the BRE syntax, most characters are treated as literals: they match only themselves (e.g., a matches "a"). The exceptions, listed below, are called metacharacters or metasequences.
| Metacharacter | Description |
|---|---|
| . | Matches any single character (many applications exclude newlines, and exactly which characters are considered newlines is flavor-, character-encoding-, and platform-specific, but it is safe to assume that the line feed character is included). Within POSIX bracket expressions, the dot character matches a literal dot. For example, a.c matches "abc", etc., but [a.c] matches only "a", ".", or "c". |
| [ ] | A bracket expression. Matches a single character that is contained within the brackets. For example, [abc] matches "a", "b", or "c". [a-z] specifies a range which matches any lowercase letter from "a" to "z". These forms can be mixed: [abcx-z] matches "a", "b", "c", "x", "y", or "z", as does [a-cx-z]. The - character is treated as a literal character if it is the last or the first character within the brackets, for example [abc-] or [-abc]. |
| [^ ] | Matches a single character that is not contained within the brackets. For example, [^abc] matches any character other than "a", "b", or "c". [^a-z] matches any single character that is not a lowercase letter from "a" to "z". Likewise, literal characters and ranges can be mixed. |
| ^ | Matches the starting position within the string. In line-based tools, it matches the starting position of any line. |
| $ | Matches the ending position of the string or the position just before a string-ending newline. In line-based tools, it matches the ending position of any line. |
| BRE: \( \) ERE: ( ) | Defines a marked subexpression. The string matched within the parentheses can be recalled later (see the next entry, \n). A marked subexpression is also called a block or capturing group. |
| \n | Matches what the nth marked subexpression matched, where n is a digit from 1 to 9. This construct is theoretically irregular and was not adopted in the POSIX ERE syntax. Some tools allow referencing more than nine capturing groups. |
| * | Matches the preceding element zero or more times. For example, ab*c matches "ac", "abc", "abbbc", etc. [xyz]* matches "", "x", "y", "z", "zx", "zyx", "xyzzy", and so on. (ab)* matches "", "ab", "abab", "ababab", and so on. |
| BRE: \{m,n\} ERE: {m,n} | Matches the preceding element at least m and not more than n times. For example, a{3,5} matches only "aaa", "aaaa", and "aaaaa". This is not found in a few older instances of regular expressions. |
Examples (several of these are also shown in the Perl sketch below):
- .at matches any three-character string ending with "at", including "hat", "cat", and "bat".
- [hc]at matches "hat" and "cat".
- [^b]at matches all strings matched by .at except "bat".
- [^hc]at matches all strings matched by .at other than "hat" and "cat".
- ^[hc]at matches "hat" and "cat", but only at the beginning of the string or line.
- [hc]at$ matches "hat" and "cat", but only at the end of the string or line.
- \[.\] matches any single character surrounded by "[" and "]" since the brackets are escaped, for example: "[a]" and "[b]".
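A minimal Perl sketch of several of these examples (for these particular constructs, Perl's syntax happens to coincide with POSIX BRE):
# Minimal sketch: bracket expressions and anchors from the examples above, in Perl syntax.
for my $w ("hat", "cat", "bat", "flat", "chat") {
    print "[hc]at   matches '$w'\n" if $w =~ /[hc]at/;
    print "[^b]at   matches '$w'\n" if $w =~ /[^b]at/;
    print "^[hc]at  matches '$w'\n" if $w =~ /^[hc]at/;   # anchored at the start
    print "[hc]at\$  matches '$w'\n" if $w =~ /[hc]at$/;   # anchored at the end
}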
POSIX Extended Regular Expressions
The meaning of metacharacters escaped with a backslash is reversed for some characters in the POSIX Extended Regular Expression (ERE) syntax. With this syntax, a backslash causes the metacharacter to be treated as a literal character. So, for example, \( \) is now ( ) and \{ \} is now { }. Additionally, support is removed for \n backreferences, and the following metacharacters are added:
| Metacharacter | Description |
|---|---|
| ? | Matches the preceding element zero or one time. For example, ba? matches "b" or "ba". |
| + | Matches the preceding element one or more times. For example, ba+ matches "ba", "baa", "baaa", and so on. |
| \| | The choice (also known as alternation or set union) operator matches either the expression before or the expression after the operator. For example, abc\|def matches "abc" or "def". |
Examples:
- [hc]+at matches "hat", "cat", "hhat", "chat", "hcat", "ccchat", and so on, but not "at".
- [hc]?at matches "hat", "cat", and "at".
- [hc]*at matches "hat", "cat", "hhat", "chat", "hcat", "ccchat", "at", and so on.
- cat|dog matches "cat" or "dog".
POSIX Extended Regular Expressions can often be used with modern Unix utilities by including the command line flag -E.
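For these quantifier and alternation examples, Perl's syntax matches ERE, so a minimal Perl sketch can reproduce them:
# Minimal sketch: the ERE quantifier and alternation examples above, in Perl syntax.
for my $w ("at", "hat", "chat", "ccchat", "dog") {
    print "[hc]+at  matches '$w'\n" if $w =~ /[hc]+at/;
    print "[hc]?at  matches '$w'\n" if $w =~ /[hc]?at/;
    print "[hc]*at  matches '$w'\n" if $w =~ /[hc]*at/;
    print "cat|dog  matches '$w'\n" if $w =~ /cat|dog/;
}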
POSIX character classes
Since many ranges of characters depend on the chosen locale setting (i.e., in some settings letters are organized as abc...zABC...Z, while in some others as aAbBcC...zZ), the POSIX standard defines some classes or categories of characters as shown in the following table:
| POSIX | Non-standard | Perl | Vim | ASCII | Description |
|---|---|---|---|---|---|
| [:alnum:] | | | | [A-Za-z0-9] | Alphanumeric characters |
| | [:word:] | \w | \w | [A-Za-z0-9_] | Alphanumeric characters plus "_" |
| | | \W | \W | [^A-Za-z0-9_] | Non-word characters |
| [:alpha:] | | | \a | [A-Za-z] | Alphabetic characters |
| [:blank:] | | | | [ \t] | Space and tab |
| | | \b | \< \> | (?<=\W)(?=\w)\|(?<=\w)(?=\W) | Word boundaries |
| [:cntrl:] | | | | [\x00-\x1F\x7F] | Control characters |
| [:digit:] | | \d | \d | [0-9] | Digits |
| | | \D | \D | [^0-9] | Non-digits |
| [:graph:] | | | | [\x21-\x7E] | Visible characters |
| [:lower:] | | | \l | [a-z] | Lowercase letters |
| [:print:] | | | \p | [\x20-\x7E] | Visible characters and the space character |
| [:punct:] | | | | [\]\[!"#$%&'()*+,./:;<=>?@\^_`{\|}~-] | Punctuation characters |
| [:space:] | | \s | \s | [ \t\r\n\v\f] | Whitespace characters |
| | | \S | \S | [^ \t\r\n\v\f] | Non-whitespace characters |
| [:upper:] | | | \u | [A-Z] | Uppercase letters |
| [:xdigit:] | | | \x | [A-Fa-f0-9] | Hexadecimal digits |
POSIX character classes can only be used within bracket expressions. For example, [[:upper:]ab] matches the uppercase letters and lowercase "a" and "b".
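A minimal Perl sketch of this example (Perl also accepts POSIX character classes inside bracket expressions, so the pattern can be tried directly):
# Minimal sketch: a POSIX character class inside a bracket expression.
for my $ch ("A", "b", "z", "7") {
    if ($ch =~ /^[[:upper:]ab]$/) {
        print "'$ch' is an uppercase letter, 'a', or 'b'\n";
    } else {
        print "'$ch' is not matched by [[:upper:]ab]\n";
    }
}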
An additional non-POSIX class understood by some tools is [:word:], which is usually defined as [:alnum:] plus underscore. This reflects the fact that in many programming languages these are the characters that may be used in identifiers. The editor Vim further distinguishes word and word-head classes (using the notation \w and \h) since in many programming languages the characters that can begin an identifier are not the same as those that can occur in other positions.
Note that what the POSIX regular expression standards call character classes are commonly referred to as POSIX character classes in other regular expression flavors which support them. With most other regular expression flavors, the term character class is used to describe what POSIX calls bracket expressions.
Perl-derived regular expressions
Perl has a more consistent and richer syntax than the POSIX basic (BRE) and extended (ERE) regular expression standards. An example of its consistency is that \ always escapes a non-alphanumeric character. Other examples of functionality possible with Perl but not with POSIX-compliant regular expressions include lazy quantification (see the next section), possessive quantifiers to control backtracking, named capture groups, and recursive patterns.
Largely because of its expressive power, many other utilities and programming languages have adopted syntax similar to Perl's; for example, Java, JavaScript, Python, Ruby, Microsoft's .NET Framework, and the W3C's XML Schema all use regular expression syntax similar to Perl's. Some languages and tools, such as Boost and PHP, support multiple regular expression flavors. Perl-derivative regular expression implementations are not identical and generally implement only a subset of Perl's features, usually those of Perl 5.0, released in 1994. With Perl 5.10, this process has come full circle, with Perl incorporating syntactic extensions originally developed in Python and PCRE ("Perl Regular Expression Documentation", perldoc.perl.org, retrieved January 8, 2012).
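As a rough illustration of two of these extensions, here is a minimal Perl sketch (requires Perl 5.10 or later) showing a named capture group and a possessive quantifier; the date string is an arbitrary example:
use 5.010;   # named captures and possessive quantifiers need Perl 5.10+
my $date = "2004-01-26";
if ($date =~ /^(?<year>\d{4})-(?<month>\d{2})-(?<day>\d{2})$/) {
    say "year=$+{year} month=$+{month} day=$+{day}";   # %+ holds the named captures
}
# A possessive quantifier (++) never gives characters back to the rest of the
# pattern, so this match fails where a plain greedy \d+ would backtrack and succeed.
say '"123" =~ /^\d++3$/ does not match' unless "123" =~ /^\d++3$/;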
Simple Regular Expressions
Simple Regular Expressions is a syntax that may be used by historical versions of application programs, and may be supported within some applications for the purpose of providing backward compatibility. It is deprecated.[10]
Lazy quantification
The standard quantifiers in regular expressions are greedy, meaning they match as much as they can. For example, to find the first instance of an item between the angled bracket symbols < > in this example:
Another whale sighting occurred on <January 26>, <2004>.
someone new to regexes would likely come up with the pattern <.*> or similar. However, instead of the "<January 26>" that might be expected, this pattern will actually return "<January 26>, <2004>" because the * quantifier is greedy; it will consume as many characters as possible from the input, and "<January 26>, <2004>" has more characters than "<January 26>".
Though this problem can be avoided in a number of ways (e.g., by specifying the text that is not to be matched: <[^>]*>), modern regular expression tools allow a quantifier to be specified as lazy (also known as non-greedy, reluctant, minimal, or ungreedy) by putting a question mark after the quantifier (e.g., <.*?>), or by using a modifier which reverses the greediness of quantifiers (though changing the meaning of the standard quantifiers can be confusing). By using a lazy quantifier, the expression tries the minimal match first. Though in the previous example lazy matching is used to select one of many matching results, in some cases it can also be used to improve performance when greedy matching would require more backtracking.
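A minimal Perl sketch of the example above, comparing the greedy, lazy, and negated-class approaches:
# Minimal sketch: greedy vs. lazy quantifiers on the example sentence above.
my $text = "Another whale sighting occurred on <January 26>, <2004>.";
($text =~ /(<.*>)/)    and print "greedy  <.*>    captured: $1\n";   # <January 26>, <2004>
($text =~ /(<.*?>)/)   and print "lazy    <.*?>   captured: $1\n";   # <January 26>
($text =~ /(<[^>]*>)/) and print "negated <[^>]*> captured: $1\n";   # <January 26>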
Patterns for non-regular languages
Many features found in modern regular expression libraries provide an expressive power that far exceeds the regular languages. For example, many implementations allow grouping subexpressions with parentheses and recalling the value they match in the same expression (backreferences). This means that, among other things, a pattern can match strings of repeated words like "papa" or "WikiWiki", called squares in formal language theory. The pattern for these strings is (.*)\1.
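A minimal Perl sketch (anchors and a non-empty group are added so that the whole string must be a square):
# Minimal sketch: recognizing "squares" (ww) with a backreference.
for my $w ("papa", "WikiWiki", "banana") {
    if ($w =~ /^(.+)\1$/) {
        print "'$w' is the square of '$1'\n";
    } else {
        print "'$w' is not a square\n";
    }
}
# "papa" is the square of "pa", "WikiWiki" of "Wiki"; "banana" is not a square.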
The language of squares is not regular, nor is it context-free. Pattern matching with an unbounded number of back references, as supported by numerous modern tools, is NP-complete.[11]
However, many tools, libraries, and engines that provide such constructions still use the term regular expression for their patterns. This has led to a nomenclature where the term regular expression has different meanings in formal language theory and pattern matching. For this reason, some people have taken to using the term regex or simply pattern to describe the latter. Larry Wall, author of the Perl programming language, writes in an essay about the design of Perl 6:
'Regular expressions' [...] are only marginally related to real regular expressions. Nevertheless, the term has grown with the capabilities of our pattern matching engines, so I'm not going to try to fight linguistic necessity here. I will, however, generally call them "regexes" (or "regexen", when I'm in an Anglo-Saxon mood).[4]
Fuzzy Regular Expressions
Variants of regular expressions can be used for working with text in natural language, when it is necessary to take into account possible typos and spelling variants. For example, the text "Julius Caesar" might be a fuzzy match for:
- Gaius Julius Caesar
- Yulius Cesar
- G. Juliy Caezar
In such cases the mechanism implements some fuzzy string matching algorithm, and possibly an algorithm for computing the similarity between a text fragment and the pattern.
This task is closely related to both full text search and named entity recognition.
Some software libraries work with fuzzy regular expressions:
- TRE – well-developed portable free project in C, which uses syntax similar to POSIX
- FREJ – open source project in Java with non-standard syntax (which utilizes prefix, Lisp-like notation), targeted to allow easy use of substitutions of inner matched fragments in outer blocks, but lacks many features of standard regular expressions.
- agrep – command-line utility (proprietary, but free for non-commercial usage).
Implementations and running times
There are at least three different algorithms that decide if and how a given regular expression matches a string.
The oldest and fastest rely on a result in formal language theory that allows every nondeterministic finite automaton (NFA) to be transformed into a deterministic finite automaton (DFA). The DFA can be constructed explicitly and then run on the resulting input string one symbol at a time. Constructing the DFA for a regular expression of size m has a time and memory cost of O(2^m), but it can be run on a string of size n in time O(n). An alternative approach is to simulate the NFA directly, essentially building each DFA state on demand and then discarding it at the next step. This keeps the DFA implicit and avoids the exponential construction cost, but the running cost rises to O(m^2 n). The explicit approach is called the DFA algorithm and the implicit approach the NFA algorithm. Adding caching to the NFA algorithm is often called the "lazy DFA" algorithm, or just the DFA algorithm without making a distinction. These algorithms are fast, but using them for recalling grouped subexpressions, lazy quantification, and similar features is tricky.[12][13]
The third algorithm is to match the pattern against the input string by backtracking. This algorithm is commonly called NFA, but this terminology can be confusing. Its running time can be exponential, which simple implementations exhibit when matching against expressions like (a|aa)*b that contain both alternation and unbounded quantification and force the algorithm to consider an exponentially increasing number of sub-cases. This behavior can cause a security problem called Regular expression Denial of Service (ReDoS).
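As a rough, machine-dependent illustration, the following minimal Perl sketch uses the classic pathological family from the cited Cox (2007) article (n optional a's followed by n required a's, matched against a string of n a's) rather than the exact pattern above; in a backtracking engine the elapsed time grows roughly exponentially with n:
# Minimal sketch: pathological backtracking (absolute times are machine-dependent).
use Time::HiRes qw(time);
for my $n (10, 14, 18, 22) {
    my $pattern = ("a?" x $n) . ("a" x $n);   # a?^n a^n
    my $subject = "a" x $n;
    my $start   = time();
    my $matched = ($subject =~ /^$pattern$/) ? 1 : 0;
    printf "n=%2d matched=%d elapsed=%.4f s\n", $n, $matched, time() - $start;
}
# Larger n quickly becomes impractical with a backtracking implementation,
# while DFA/NFA-based engines handle the same patterns in linear time.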
Although backtracking implementations only give an exponential guarantee in the worst case, they provide much greater flexibility and expressive power. For example, any implementation which allows the use of backreferences, or implements the various extensions introduced by Perl, must include some kind of backtracking. Some implementations try to provide the best of both algorithms by first running a fast DFA algorithm, and revert to a potentially slower backtracking algorithm only when a backreference is encountered during the match.
Unicode
In theoretical terms, any token set can be matched by regular expressions as long as it is pre-defined. In terms of historical implementations, regular expressions were originally written to use ASCII characters as their token set though regular expression libraries have supported numerous other character sets. Many modern regular expression engines offer at least some support for Unicode. In most respects it makes no difference what the character set is, but some issues do arise when extending regular expressions to support Unicode.
- Supported encoding. Some regular expression libraries expect to work on some particular encoding instead of on abstract Unicode characters. Many of these require the UTF-8 encoding, while others might expect UTF-16, or UTF-32. In contrast, Perl and Java are agnostic on encodings, instead operating on decoded characters internally.
- Supported Unicode range. Many regular expression engines support only the Basic Multilingual Plane, that is, the characters which can be encoded with only 16 bits. Currently, only a few regular expression engines (e.g., Perl's and Java's) can handle the full 21-bit Unicode range.
- Extending ASCII-oriented constructs to Unicode. For example, in ASCII-based implementations, character ranges of the form [x-y] are valid wherever x and y have code points in the range [0x00,0x7F] and codepoint(x) ≤ codepoint(y). The natural extension of such character ranges to Unicode would simply change the requirement that the endpoints lie in [0x00,0x7F] to the requirement that they lie in [0,0x10FFFF]. However, in practice this is often not the case. Some implementations, such as that of gawk, do not allow character ranges to cross Unicode blocks. A range like [0x61,0x7F] is valid since both endpoints fall within the Basic Latin block, as is [0x0530,0x0560] since both endpoints fall within the Armenian block, but a range like [0x0061,0x0532] is invalid since it includes multiple Unicode blocks. Other engines, such as that of the Vim editor, allow block-crossing but the character values must not be more than 256 apart.[14]
- Case insensitivity. Some case-insensitivity flags affect only the ASCII characters. Other flags affect all characters. Some engines have two different flags, one for ASCII, the other for Unicode. Exactly which characters belong to the POSIX classes also varies.
- Cousins of case insensitivity. As ASCII has case distinction, case insensitivity became a logical feature in text searching. Unicode introduced alphabetic scripts without case like Devanagari. For these, case sensitivity is not applicable. For scripts like Chinese, another distinction seems logical: between traditional and simplified. In Arabic scripts, insensitivity to initial, medial, final, and isolated position may be desired. In Japanese, insensitivity between hiragana and katakana is sometimes useful.
- Normalization. Unicode has combining characters. As on old typewriters, a plain letter can be followed by one or more non-spacing symbols (usually diacritics like accent marks) to form a single printed character; Unicode also provides precomposed characters, i.e., characters that already include one or more combining characters. A sequence of a base character plus combining characters should be matched against the identical single precomposed character. The process of standardizing sequences of a base character plus combining characters is called normalization.
- New control codes. Unicode introduced, amongst others, byte order marks and text direction markers. These codes might have to be dealt with in a special way.
- Introduction of character classes for Unicode blocks, scripts, and numerous other character properties. Block properties are much less useful than script properties, because a block can have code points from several different scripts, and a script can have code points from several different blocks.[15] In Perl and the java.util.regex library, properties of the form \p{InX} or \p{Block=X} match characters in block X, and \P{InX} or \P{Block=X} matches code points not in that block. Similarly, \p{Armenian}, \p{IsArmenian}, or \p{Script=Armenian} matches any character in the Armenian script. In general, \p{X} matches any character with either the binary property X or the general category X. For example, \p{Lu}, \p{Uppercase_Letter}, or \p{GC=Lu} matches any upper-case letter. Binary properties that are not general categories include \p{White_Space}, \p{Alphabetic}, \p{Math}, and \p{Dash}. Examples of non-binary properties are \p{Bidi_Class=Right_to_Left}, \p{Word_Break=A_Letter}, and \p{Numeric_Value=10}. (A short Perl sketch of these property classes follows this list.)
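A minimal Perl sketch of a few of these property classes (the sample characters are arbitrary):
# Minimal sketch: Unicode property classes in Perl.
use utf8;                                 # this source contains non-ASCII literals
binmode STDOUT, ':encoding(UTF-8)';
for my $ch ("A", "é", "Ա", "5", "–") {    # Latin, Latin, Armenian, digit, en dash
    print "'$ch' matches \\p{Lu}\n"              if $ch =~ /\p{Lu}/;
    print "'$ch' matches \\p{Alphabetic}\n"      if $ch =~ /\p{Alphabetic}/;
    print "'$ch' matches \\p{Script=Armenian}\n" if $ch =~ /\p{Script=Armenian}/;
    print "'$ch' matches \\p{GC=Nd}\n"           if $ch =~ /\p{GC=Nd}/;
    print "'$ch' matches \\p{Dash}\n"            if $ch =~ /\p{Dash}/;
}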
Uses
Regular expressions are useful in the production of syntax highlighting systems, data validation, and many other tasks.
While regular expressions would be useful on Internet search engines, processing them across the entire database could consume excessive computer resources depending on the complexity and design of the regex. Although in many cases system administrators can run regex-based queries internally, most search engines do not offer regex support to the public. Notable exceptions: Google Code Search, Exalead.
Examples
A regular expression is a string that is used to describe or match a set of strings according to certain syntax rules. The specific syntax rules vary depending on the specific implementation, programming language, or library in use. Additionally, the functionality of regex implementations can vary between versions.
Despite this variability, and because regular expressions can be difficult to both explain and understand without examples, this article provides a basic description of some of the properties of regular expressions by way of illustration.
The following conventions are used in the examples.[16]
metacharacter(s) ;; the metacharacters column specifies the regex syntax being demonstrated
=~ m// ;; indicates a regex match operation in Perl
=~ s/// ;; indicates a regex substitution operation in Perl
Note that these examples all use Perl-like syntax; standard POSIX regular expressions are different.
Unless otherwise indicated, the following examples conform to the Perl programming language, release 5.8.8, January 31, 2006. This means that other implementations may lack support for some parts of the syntax shown here (e.g., basic vs. extended regex, \( \) vs. ( ), or \d vs. POSIX [:digit:]).
The syntax and conventions used in these examples coincide with that of other programming environments as well (e.g., see Java in a Nutshell — Page 213, Python Scripting for Computational Science — Page 320, Programming PHP — Page 106).
The following entries each give the metacharacter(s) being demonstrated, a description, and a Perl example. Note that all the if statements in the examples return a TRUE value.
.
Normally matches any character except a newline. Within square brackets the dot is literal.
$string1 = "Hello World\n";
if ($string1 =~ m/...../) {
print "$string1 has length >= 5\n";
}
( )
Groups a series of pattern elements to a single element. When you match a pattern within parentheses, you can use any of $1, $2, ... later to refer to the previously matched pattern.
$string1 = "Hello World\n";
if ($string1 =~ m/(H..).(o..)/) {
print "We matched '$1' and '$2'\n";
}
Output: We matched 'Hel' and 'o W'
+
Matches the preceding pattern element one or more times.
$string1 = "Hello World\n";
if ($string1 =~ m/l+/) {
print "There are one or more consecutive letter \"l\"'s in $string1\n";
}
Output: There are one or more consecutive letter "l"'s in Hello World
?
Matches the preceding pattern element zero or one times.
$string1 = "Hello World\n";
if ($string1 =~ m/H.?e/) {
print "There is an 'H' and a 'e' separated by ";
print "0-1 characters (Ex: He Hoe)\n";
}
?
Modifies the *, +, or {M,N}'d regex that comes before to match as few times as possible.
$string1 = "Hello World\n";
if ($string1 =~ m/(l.+?o)/) {
print "The non-greedy match with 'l' followed by one or ";
print "more characters is 'llo' rather than 'llo wo'.\n";
}
*
Matches the preceding pattern element zero or more times.
$string1 = "Hello World\n";
if ($string1 =~ m/el*o/) {
print "There is an 'e' followed by zero to many ";
print "'l' followed by 'o' (eo, elo, ello, elllo)\n";
}
{M,N}
Denotes the minimum M and the maximum N match count.
$string1 = "Hello World\n";
if ($string1 =~ m/l{1,2}/) {
print "There exists a substring with at least 1 ";
print "and at most 2 l's in $string1\n";
}
[...]
Denotes a set of possible character matches.
$string1 = "Hello World\n";
if ($string1 =~ m/[aeiou]+/) {
print "$string1 contains one or more vowels.\n";
}
|
Separates alternate possibilities.
$string1 = "Hello World\n";
if ($string1 =~ m/(Hello|Hi|Pogo)/) {
print "At least one of Hello, Hi, or Pogo is ";
print "contained in $string1.\n";
}
\b
Matches a zero-width boundary between a word-class character (see next) and either a non-word class character or an edge.
$string1 = "Hello World\n";
if ($string1 =~ m/llo\b/) {
print "There is a word that ends with 'llo'\n";
}
\w
Matches an alphanumeric character, including "_"; same as [A-Za-z0-9_] in ASCII. In Unicode[15] same as [\p{Alphabetic}\p{GC=Mark}\p{GC=Decimal_Number}\p{GC=Connector_Punctuation}], where the Alphabetic property contains more than just Letters, and the Decimal_Number property contains more than [0-9].
$string1 = "Hello World\n";
if ($string1 =~ m/\w/) {
print "There is at least one alphanumeric ";
print "character in $string1 (A-Z, a-z, 0-9, _)\n";
}
\W
Matches a non-alphanumeric character, excluding "_"; same as [^A-Za-z0-9_] in ASCII, and [^\p{Alphabetic}\p{GC=Mark}\p{GC=Decimal_Number}\p{GC=Connector_Punctuation}] in Unicode.
$string1 = "Hello World\n";
if ($string1 =~ m/\W/) {
print "The space between Hello and ";
print "World is not alphanumeric\n";
}
\s
Matches a whitespace character, which in ASCII are tab, line feed, form feed, carriage return, and space; in Unicode, also matches no-break spaces, next line, and the variable-width spaces (amongst others).
$string1 = "Hello World\n";
if ($string1 =~ m/\s.*\s/) {
print "There are TWO whitespace characters, which may";
print " be separated by other characters, in $string1";
}
\S
Matches anything BUT a whitespace.
$string1 = "Hello World\n";
if ($string1 =~ m/\S.*\S/) {
print "There are TWO non-whitespace characters, which";
print " may be separated by other characters, in $string1";
}
\d
Matches a digit; same as [0-9] in ASCII; in Unicode, same as the \p{Digit} or \p{GC=Decimal_Number} property, which is itself the same as the \p{Numeric_Type=Decimal} property.
$string1 = "99 bottles of beer on the wall.";
if ($string1 =~ m/(\d+)/) {
print "$1 is the first number in '$string1'\n";
}
Output: 99 is the first number in '99 bottles of beer on the wall.'
\D
Matches a non-digit; same as [^0-9] in ASCII or \P{Digit} in Unicode.
$string1 = "Hello World\n";
if ($string1 =~ m/\D/) {
print "There is at least one character in $string1";
print " that is not a digit.\n";
}
^
Matches the beginning of a line or string.
$string1 = "Hello World\n";
if ($string1 =~ m/^He/) {
print "$string1 starts with the characters 'He'\n";
}
$
Matches the end of a line or string.
$string1 = "Hello World\n";
if ($string1 =~ m/rld$/) {
print "$string1 is a line or string ";
print "that ends with 'rld'\n";
}
\A
Matches the beginning of a string (but not an internal line).
$string1 = "Hello\nWorld\n";
if ($string1 =~ m/\AH/) {
print "$string1 is a string ";
print "that starts with 'H'\n";
}
\z
Matches the end of a string (but not an internal line).
(see Perl Best Practices, page 240)
$string1 = "Hello\nWorld\n";
if ($string1 =~ m/d\n\z/) {
print "$string1 is a string ";
print "that ends with 'd\\n'\n";
}
[^...]
Matches every character except the ones inside brackets.
$string1 = "Hello World\n";
if ($string1 =~ m/[^abc]/) {
print "$string1 contains a character other than ";
print "a, b, and c\n";
}
See also
- Comparison of regular expression engines
- Extended Backus–Naur Form
- List of regular expression software — applications which support regular expressions
- Regular tree grammar
- Regular language
- Regular expression on the semi-automated MediaWiki editor AutoWikiBrowser
Notes
- ^ Kleene (1956)
- ^ Raymond, Eric S. citing Dennis Ritchie (2003). "Jargon File 4.4.7: grep".
- ^ Wall, Larry and the Perl 5 development team (2006). "perlre: Perl regular expressions".
- ^ a b Wall (2002)
- ^ a b Hopcroft, Motwani & Ullman (2000)
- ^ Sipser (1998)
- ^ Gelade & Neven (2008)
- ^ Gruber & Holzer (2008)
- ^ Kozen (1991)
- ^ The Single Unix Specification (Version 2)
- ^ see Aho (1990) Theorem 6.2
- ^ Cox (2007)
- ^ Laurikari (2009)
- ^ http://vimdoc.sourceforge.net/htmldoc/pattern.html#/%5B%5D
- ^ a b "UTS#18 on Unicode Regular Expressions, Annex A: Character Blocks". Retrieved 2010-02-05.
- ^ The character 'm' is not always required to specify a Perl match operation. For example, m/[^abc]/ could also be rendered as /[^abc]/. The 'm' is only necessary if the user wishes to specify a match operation without using a forward-slash as the regex delimiter. Sometimes it is useful to specify an alternate regex delimiter in order to avoid "delimiter collision". See 'perldoc perlre' for more details.
References
- Aho, Alfred V. (1990). "Algorithms for finding patterns in strings". In van Leeuwen, Jan (ed.). Handbook of Theoretical Computer Science, volume A: Algorithms and Complexity. The MIT Press. pp. 255–300.
- "The Single UNIX ® Specification, Version 2" (Document). The Open Group. 1997.
- "The Open Group Base Specifications Issue 6, IEEE Std 1003.1, 2004 Edition" (Document). The Open Group. 2004.
- Cox, Russ (2007). "Regular Expression Matching Can Be Simple and Fast".
- Forta, Ben (2004). Sams Teach Yourself Regular Expressions in 10 Minutes. Sams. ISBN 0-672-32566-7.
- Friedl, Jeffrey (2002). Mastering Regular Expressions. O'Reilly. ISBN 0-596-00289-0.
- Gelade, Wouter; Neven, Frank (2008). "Succinctness of the Complement and Intersection of Regular Expressions". Proceedings of the 25th International Symposium on Theoretical Aspects of Computer Science (STACS 2008). pp. 325–336.
- Gruber, Hermann; Holzer, Markus (2008). "Finite Automata, Digraph Connectivity, and Regular Expression Size" (PDF). Proceedings of the 35th International Colloquium on Automata, Languages and Programming (ICALP 2008). Vol. 5126. pp. 39–50. doi:10.1007/978-3-540-70583-3_4.
- Habibi, Mehran (2004). Real World Regular Expressions with Java 1.4. Springer. ISBN 1-59059-107-0.
- Hopcroft, John E.; Motwani, Rajeev; Ullman, Jeffrey D. (2000). Introduction to Automata Theory, Languages, and Computation (2nd ed.). Addison-Wesley.
- Kleene, Stephen C. (1956). "Representation of Events in Nerve Nets and Finite Automata". In Shannon, Claude E.; McCarthy, John (eds.). Automata Studies. Princeton University Press. pp. 3–42.
- Kozen, Dexter (1991). "A completeness theorem for Kleene algebras and the algebra of regular events". Proceedings of the 6th Annual IEEE Symposium on Logic in Computer Science (LICS 1991). pp. 214–225.
- Laurikari, Ville (2009). "TRE library 0.7.6".
- Liger, Francois (2002). Visual Basic .NET Text Manipulation Handbook. Wrox Press. ISBN 1-86100-730-2.
- Sipser, Michael (1998). "Chapter 1: Regular Languages". Introduction to the Theory of Computation. PWS Publishing. pp. 31–90. ISBN 0-534-94728-X.
- Stubblebine, Tony (2003). Regular Expression Pocket Reference. O'Reilly. ISBN 0-596-00415-X.
- Wall, Larry (2002). "Apocalypse 5: Pattern Matching".
- Goyvaerts, Jan (2009). Regular Expressions Cookbook. O'Reilly. ISBN 978-0-596-52068-7.
External links
Wikibooks has a book on the topic of: Regular Expressions
The Wikibook R Programming has a page on the topic of: Text Processing
- ISO/IEC 9945-2:1993 Information technology – Portable Operating System Interface (POSIX) – Part 2: Shell and Utilities
- ISO/IEC 9945-2:2002 Information technology – Portable Operating System Interface (POSIX) – Part 2: System Interfaces
- ISO/IEC 9945-2:2003 Information technology – Portable Operating System Interface (POSIX) – Part 2: System Interfaces
- ISO/IEC/IEEE 9945:2009 Information technology – Portable Operating System Interface (POSIX®) Base Specifications, Issue 7
- Java Tutorials: Regular Expressions
- Perl Regular Expressions documentation
- VBScript and Regular Expressions
- .NET Framework Regular Expressions
- Pattern matching tools and libraries
- Structural Regular Expressions by Rob Pike
- JavaScript Regular Expressions Chapter and RegExp Object Reference at the Mozilla Developer Center