In computer science, a lexical grammar is a formal grammar defining the syntax of tokens. That is, it specifies the rules governing how a character sequence is divided into subsequences of characters, each of which represents an individual token. A lexical grammar is frequently defined in terms of regular expressions.
For instance, the lexical grammar for many programming languages specifies that a string literal starts with a " character and continues until a matching " is found (escaping makes this more complicated), that an identifier is an alphanumeric sequence (letters and digits, usually also allowing underscores and disallowing an initial digit), and that an integer literal is a sequence of digits. So in the character sequence  "abc" xyz1 23  the tokens are a string, an identifier and a number (plus whitespace tokens), because the space character terminates the sequence of characters forming the identifier. Further, certain sequences are categorized as keywords; these generally have the same form as identifiers (usually alphabetical words) but are categorized separately, and formally they have a different token type.
Regular expressions for some common lexical rules follow, using C as an example.
Unescaped string literal (quote, followed by non-quotes, ending in a quote): "[^"]*"
Escaped string literal (quote, followed by a sequence of escaped characters or non-quotes, ending in a quote): "(\\.|[^\\"])*"
Decimal integer literal (no leading zero): [1-9][0-9]*
Hexadecimal integer literal: 0[xX][0-9a-fA-F]+
Octal integer literal: 0[0-7]*