Character (computing)

In computer and machine-based telecommunications terminology, a character is a unit of information that roughly corresponds to a grapheme, grapheme-like unit, or symbol, such as in an alphabet or syllabary in the written form of a natural language.

Examples of characters include letters, numerical digits, and common punctuation marks (such as '.' or '-'). The concept also includes control characters, which do not correspond to symbols in a particular natural language, but rather to other bits of information used to process text in one or more languages. Examples of control characters include carriage return or tab, as well as instructions to printers or other devices that display or otherwise process text.
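As a concrete illustration, the minimal C sketch below prints the standard ASCII codes of two control characters alongside a printable letter (the numeric values are ASCII facts; the program itself is only an example and assumes an ASCII-based execution character set):

    #include <stdio.h>

    /* Control characters have numeric codes just like printable characters.
       In ASCII, horizontal tab is 9 and carriage return is 13. */
    int main(void)
    {
        printf("tab             = %d\n", '\t');  /* 9  */
        printf("carriage return = %d\n", '\r');  /* 13 */
        printf("letter A        = %d\n", 'A');   /* 65 */
        return 0;
    }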

Characters are typically combined into strings.

Character encoding

Computers and communication equipment represent characters using a character encoding that assigns each character to something that can be stored or transmitted through a network, typically an integer quantity represented by a sequence of bits. Two examples of popular encodings are ASCII and the UTF-8 encoding for Unicode. While most character encodings map characters to numbers and/or bit sequences, Morse code instead represents characters using a series of electrical impulses of varying length.
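For example, the following C sketch (assuming an ASCII-compatible system, with the UTF-8 bytes hard-coded rather than produced by any library) shows the single integer that ASCII assigns to the letter 'A' and the two-byte sequence that UTF-8 assigns to the character 'é' (U+00E9):

    #include <stdio.h>

    int main(void)
    {
        /* ASCII assigns 'A' the integer 65 (bit pattern 0100 0001). */
        printf("'A' in ASCII: %d\n", 'A');

        /* UTF-8 encodes the abstract character U+00E9 ('e' with acute
           accent) as the two-byte sequence 0xC3 0xA9. */
        const unsigned char e_acute[] = { 0xC3, 0xA9 };
        printf("U+00E9 in UTF-8: %02X %02X\n", e_acute[0], e_acute[1]);
        return 0;
    }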

Terminology

Historically, the term character has been widely used by industry professionals to refer to an encoded character, often as defined by the programming language or API. Likewise, character set has been widely used to refer to a specific repertoire of characters that have been mapped to specific bit sequences or numerical codes. The term glyph is used to describe a particular physical appearance of a character. Many computer fonts consist of glyphs that are indexed by the numerical code of the corresponding character.

With the advent and widespread acceptance of Unicode[1] and bit-agnostic encoding forms,[clarification needed] a character is increasingly being seen as a unit of information, independent of any particular visual manifestation. The ISO/IEC 10646 (Unicode) International Standard defines character, or abstract character, as "a member of a set of elements used for the organisation, control, or representation of data". Unicode's definition supplements this with explanatory notes that encourage the reader to differentiate between characters, graphemes, and glyphs, among other things.

For example, the Hebrew letter aleph ("א") is often used by mathematicians to denote certain kinds of infinity, but it is also used in ordinary Hebrew text. In Unicode, these two uses are considered different characters, and have two different Unicode numerical identifiers ("code points"), though they may be rendered identically. Conversely, the Chinese logogram for water ("水") may have a slightly different appearance in Japanese texts than it does in Chinese texts, and local typefaces may reflect this. But nonetheless in Unicode they are considered the same character, and share the same code point.
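A short C sketch makes the aleph example concrete. The UTF-8 byte sequences below are hard-coded (U+05D0 HEBREW LETTER ALEF is D7 90; U+2135 ALEF SYMBOL is E2 84 B5), and the helper function is only illustrative:

    #include <stdio.h>
    #include <string.h>

    /* Print the bytes of a UTF-8 encoded character. */
    static void dump(const char *label, const char *utf8)
    {
        printf("%s:", label);
        for (size_t i = 0; i < strlen(utf8); i++)
            printf(" %02X", (unsigned char)utf8[i]);
        printf("\n");
    }

    int main(void)
    {
        /* The same visual shape, but two distinct code points. */
        dump("U+05D0 HEBREW LETTER ALEF", "\xD7\x90");
        dump("U+2135 ALEF SYMBOL", "\xE2\x84\xB5");
        return 0;
    }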

The Unicode standard also differentiates between these abstract characters and coded characters or encoded characters that have been paired with numeric codes that facilitate their representation in computers.

char

A char in the C programming language is a data type with a fixed size of one byte, which at one time was large enough to hold any character value from ASCII or similar encodings. Since a byte can typically hold only 256 different values, a single char cannot represent every character of Unicode and other large modern character sets. Instead, larger storage units such as wchar_t are used, or a character is spread over more than one byte, as in UTF-8.
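The following sketch (plain C, with the UTF-8 bytes written out explicitly; the size printed for wchar_t varies by platform) illustrates the storage strategies just mentioned:

    #include <stdio.h>
    #include <string.h>
    #include <wchar.h>

    int main(void)
    {
        /* A char holds one byte: enough for ASCII, not for all of Unicode. */
        char ascii = 'A';                      /* code 65 fits in one byte */

        /* A wider fixed-size type can hold larger code points directly. */
        wchar_t alpha = L'\u03B1';             /* U+03B1 GREEK SMALL LETTER ALPHA */

        /* Alternatively, UTF-8 spreads one character over several chars. */
        const char utf8_alpha[] = "\xCE\xB1";  /* the same character, two bytes */

        printf("sizeof(char)    = %zu\n", sizeof ascii);
        printf("sizeof(wchar_t) = %zu\n", sizeof alpha);
        printf("UTF-8 bytes     = %zu\n", strlen(utf8_alpha));
        return 0;
    }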

Unfortunately, because a character was long stored in a single byte, the terms "character" and "byte" came to be used interchangeably in much documentation. This often makes the documentation confusing or misleading. It has also led to extremely inefficient handling of UTF-8, in which simple byte offsets are replaced with repeated counting of characters from the start of the string, and to bugs when different systems disagree on what is being counted.
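The inefficiency is easy to see in code. The helper below (a hypothetical function written for this illustration, not a standard library routine) counts code points in a UTF-8 string by scanning every byte, which is why replacing byte offsets with character counts forces repeated passes over the data:

    #include <stdio.h>
    #include <string.h>

    /* Count code points in a UTF-8 string by skipping continuation bytes
       (bytes of the form 10xxxxxx). The cost is proportional to the byte
       length of the string. */
    static size_t utf8_codepoint_count(const char *s)
    {
        size_t count = 0;
        for (; *s != '\0'; s++)
            if (((unsigned char)*s & 0xC0) != 0x80)
                count++;
        return count;
    }

    int main(void)
    {
        const char *text = "na\xC3\xAFve";    /* "naïve": 6 bytes, 5 characters */
        printf("bytes      = %zu\n", strlen(text));
        printf("characters = %zu\n", utf8_codepoint_count(text));
        return 0;
    }

A system that counts bytes and a system that counts code points will report different lengths for the same string, which is exactly the kind of disagreement that produces the bugs mentioned above.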

See also

References

  1. Davis, Mark (2008-05-05). "Moving to Unicode 5.1". Google Blog. Retrieved 2008-09-28.