Character (computing)


A string of seven characters: the word "example" with each letter in a separate box; the label "String" refers to the entire sequence, while "Character" points to an individual box.

In computing and telecommunications, a character is the encoded representation of a natural-language character (such as a letter, numeral, or punctuation mark), of whitespace (such as a space or tab), or of a control character (which controls computer hardware or software that consumes character-based data). A sequence of characters is called a string.

Some character encoding systems represent each character using a fixed number of bits, whereas other systems use varying sizes. Various fixed lengths were used in now-obsolete systems, such as the six-bit character code,[1][2] the five-bit Baudot code and even 4-bit codes (with only 16 possible values).[3] The more modern ASCII system uses 7 bits per character, typically stored in an 8-bit byte. Today, the Unicode-based UTF-8 encoding uses a varying number of byte-sized code units to represent each code point, and one or more code points may combine to encode a character.
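As an illustration of how a variable-length encoding works, the following minimal C sketch (illustrative only, not drawn from any cited standard) packs a single code point into one to four byte-sized UTF-8 code units:

    #include <stdio.h>

    /* Encode one Unicode code point (up to U+10FFFF) as UTF-8, writing
       1 to 4 byte-sized code units into buf and returning how many were
       produced.  Surrogate-range and validity checks are omitted for brevity. */
    static int utf8_encode(unsigned long cp, unsigned char buf[4])
    {
        if (cp < 0x80) {                /* 1 unit: 0xxxxxxx */
            buf[0] = (unsigned char)cp;
            return 1;
        } else if (cp < 0x800) {        /* 2 units: 110xxxxx 10xxxxxx */
            buf[0] = 0xC0 | (unsigned char)(cp >> 6);
            buf[1] = 0x80 | (unsigned char)(cp & 0x3F);
            return 2;
        } else if (cp < 0x10000) {      /* 3 units */
            buf[0] = 0xE0 | (unsigned char)(cp >> 12);
            buf[1] = 0x80 | (unsigned char)((cp >> 6) & 0x3F);
            buf[2] = 0x80 | (unsigned char)(cp & 0x3F);
            return 3;
        } else {                        /* 4 units */
            buf[0] = 0xF0 | (unsigned char)(cp >> 18);
            buf[1] = 0x80 | (unsigned char)((cp >> 12) & 0x3F);
            buf[2] = 0x80 | (unsigned char)((cp >> 6) & 0x3F);
            buf[3] = 0x80 | (unsigned char)(cp & 0x3F);
            return 4;
        }
    }

    int main(void)
    {
        unsigned long cps[] = { 0x41, 0xEF, 0x6C34, 0x1F600 }; /* 'A', 'ï', '水', an emoji */
        for (int i = 0; i < 4; i++) {
            unsigned char buf[4];
            int n = utf8_encode(cps[i], buf);
            printf("U+%04lX -> %d byte(s):", cps[i], n);
            for (int j = 0; j < n; j++)
                printf(" %02X", buf[j]);
            printf("\n");
        }
        return 0;
    }

Running it shows, for example, that 'A' needs a single code unit (41) while the emoji at U+1F600 needs four (F0 9F 98 80).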

Terminology

Character

In general, a character is a symbol (such as a letter or number) that represents information; in the context of computing, it is a representation of such a symbol that may be accepted by a computer.[4] A character implies an encoding of information, often as defined by a standard such as ANSI or Unicode.

Character set

A character set identifies a repertoire of characters that are each mapped to a unique numeric value.

Glyph

Glyph describes a particular visual appearance of a character. Many computer fonts consist of glyphs that are indexed by the numerical code of the corresponding character.

With the advent and widespread acceptance of Unicode[5] and bit-agnostic coded character sets,[clarification needed] a character is increasingly being seen as a unit of information, independent of any particular visual manifestation. The ISO/IEC 10646 (Unicode) International Standard defines character, or abstract character as "a member of a set of elements used for the organization, control, or representation of data". Unicode's definition supplements this with explanatory notes that encourage the reader to differentiate between characters, graphemes, and glyphs, among other things. Such differentiation is an instance of the wider theme of the separation of presentation and content.

For example, the Hebrew letter aleph ("א") is often used by mathematicians to denote certain kinds of infinity (ℵ), but it is also used in ordinary Hebrew text. In Unicode, these two uses are considered different characters, and have two different Unicode numerical identifiers ("code points"), though they may be rendered identically. Conversely, the Chinese logogram for water ("水") may have a slightly different appearance in Japanese texts than it does in Chinese texts, and local typefaces may reflect this. But nonetheless in Unicode they are considered the same character, and share the same code point.

The Unicode standard differentiates between these abstract characters and coded characters or encoded characters that have been paired with numeric codes that facilitate their representation in computers.

Combining character

Unicode addresses the combining character by allocating a separate code point to each of:

  • 'i' (U+0069),
  • the combining diaeresis (U+0308), and
  • 'ï' (U+00EF).

This makes it possible to encode the middle character of the word 'naïve' either as the single character 'ï' or as the combination of the character 'i' with the combining diaeresis (U+0069 LATIN SMALL LETTER I + U+0308 COMBINING DIAERESIS); both sequences are rendered as 'ï'.
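A short C sketch (the byte values are spelled out with explicit \x escapes so the source-file encoding does not matter; output assumes a UTF-8 terminal) contrasts the two encodings:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Precomposed: U+00EF LATIN SMALL LETTER I WITH DIAERESIS (UTF-8: C3 AF). */
        const char *precomposed = "\xC3\xAF";
        /* Decomposed: U+0069 LATIN SMALL LETTER I followed by
           U+0308 COMBINING DIAERESIS (UTF-8: 69 CC 88). */
        const char *decomposed = "i\xCC\x88";

        printf("precomposed: %zu bytes\n", strlen(precomposed)); /* 2 */
        printf("decomposed:  %zu bytes\n", strlen(decomposed));  /* 3 */
        /* The byte sequences differ even though both render as the same
           glyph; treating them as equal requires Unicode normalization. */
        printf("byte-equal: %s\n", strcmp(precomposed, decomposed) == 0 ? "yes" : "no");
        return 0;
    }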

char

In C, char (short for character) is a data type with a size of exactly one byte,[6][7] but unlike the de facto meaning of byte as 8 bits, here byte is defined only as being large enough to contain any member of the "basic execution character set". The number of bits per char used by a compiler is accessible via the CHAR_BIT macro. By far the most common size is 8 bits, and POSIX requires it to be 8 bits.[8] Modern C standards require char to be able to hold UTF-8 code units,[6][7] which implies a minimum size of 8 bits.
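A minimal C sketch showing how a program can query the implementation's char width (the exact output depends on the platform; on POSIX systems the first line prints 8):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* CHAR_BIT is the number of bits in a char on this implementation;
           sizeof(char) is 1 by definition regardless of that width. */
        printf("bits per char: %d\n", CHAR_BIT);
        printf("sizeof(char):  %zu\n", sizeof(char));
        return 0;
    }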

Since a Unicode code point may require as many as 21 bits,[9] the char type is generally not large enough to hold every character. Nonetheless, it is well suited to the UTF-8 encoding, in which each code point requires 1 to 4 bytes.

The fact that a character was historically stored in a single byte has led to the terms "char" and "character" being used interchangeably, which causes confusion today when multibyte encodings such as UTF-8 are used. Modern POSIX documentation attempts to fix this by defining "character" as a sequence of one or more bytes representing a single graphic symbol or control code, and uses "byte" when referring to char data.[10][11] However, it still contains errors, such as defining an array of char as a character array (rather than as a byte array).[12]
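The distinction matters in practice: strlen, for example, counts char elements (bytes), not user-perceived characters. A small sketch (the \x escapes force the UTF-8 byte sequence regardless of source encoding):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* "naïve" with the precomposed 'ï' (UTF-8: C3 AF):
           5 user-perceived characters, but 6 char (byte) elements. */
        const char *word = "na\xC3\xAFve";
        printf("bytes: %zu\n", strlen(word)); /* prints 6, not 5 */
        return 0;
    }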

Unicode can also be stored in strings of code units that are larger than char, known as wide characters. The original C type for this purpose was wchar_t. Because some platforms define wchar_t as 16 bits and others as 32 bits, more recent versions of the standard provide the unambiguous types char16_t and char32_t. Even then, the objects being stored might not be characters; for instance, the variable-length UTF-16 encoding is often stored in arrays of char16_t.
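A brief C11 sketch (requires the <uchar.h> header and a compiler accepting the u"" and U"" literal prefixes) showing that a single supplementary-plane character occupies two char16_t code units but only one char32_t unit:

    #include <stdio.h>
    #include <uchar.h>

    int main(void)
    {
        /* U+1F600 lies outside the Basic Multilingual Plane, so UTF-16
           needs a surrogate pair (two char16_t units) while UTF-32 needs
           only one char32_t unit. */
        char16_t utf16[] = u"\U0001F600"; /* two code units plus terminator */
        char32_t utf32[] = U"\U0001F600"; /* one code unit plus terminator  */

        printf("UTF-16 code units: %zu\n", sizeof utf16 / sizeof utf16[0] - 1); /* 2 */
        printf("UTF-32 code units: %zu\n", sizeof utf32 / sizeof utf32[0] - 1); /* 1 */
        return 0;
    }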

Other languages also have a char type. Many, including C++, use an 8-bit byte, like C.[7] Others, such as Java, use a 2-byte (16-bit) char type to more directly accommodate UTF-16.

References

  1. ^ Dreyfus, Phillippe (1958). "System design of the Gamma 60". Managing Requirements Knowledge, International Workshop on, Los Angeles. New York. pp. 130–133. doi:10.1109/AFIPS.1958.32. […] Internal data code is used: Quantitative (numerical) data are coded in a 4-bit decimal code; qualitative (alpha-numerical) data are coded in a 6-bit alphanumerical code. The internal instruction code means that the instructions are coded in straight binary code.
    As to the internal information length, the information quantum is called a "catena," and it is composed of 24 bits representing either 6 decimal digits, or 4 alphanumerical characters. This quantum must contain a multiple of 4 and 6 bits to represent a whole number of decimal or alphanumeric characters. Twenty-four bits was found to be a good compromise between the minimum 12 bits, which would lead to a too-low transfer flow from a parallel readout core memory, and 36 bits or more, which was judged as too large an information quantum. The catena is to be considered as the equivalent of a character in variable word length machines, but it cannot be called so, as it may contain several characters. It is transferred in series to and from the main memory.
    Not wanting to call a "quantum" a word, or a set of characters a letter, (a word is a word, and a quantum is something else), a new word was made, and it was called a "catena." It is an English word and exists in Webster's although it does not in French. Webster's definition of the word catena is, "a connected series;" therefore, a 24-bit information item. The word catena will be used hereafter.
    The internal code, therefore, has been defined. Now what are the external data codes? These depend primarily upon the information handling device involved. The Gamma 60 [fr] is designed to handle information relevant to any binary coded structure. Thus an 80-column punched card is considered as a 960-bit information item; 12 rows multiplied by 80 columns equals 960 possible punches; is stored as an exact image in 960 magnetic cores of the main memory with 2 card columns occupying one catena. […]
  2. ^ Blaauw, Gerrit Anne; Brooks Jr., Frederick Phillips; Buchholz, Werner (1962), "4: Natural Data Units" (PDF), in Buchholz, Werner (ed.), Planning a Computer System – Project Stretch, McGraw-Hill Book Company, Inc. / The Maple Press Company, York, PA., pp. 39–40, LCCN 61-10466, archived (PDF) from the original on 2017-04-03, retrieved 2017-04-03, […] Terms used here to describe the structure imposed by the machine design, in addition to bit, are listed below.
    Byte denotes a group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units. A term other than character is used here because a given character may be represented in different applications by more than one code, and different codes may use different numbers of bits (i.e., different byte sizes). In input-output transmission the grouping of bits may be completely arbitrary and have no relation to actual characters. (The term is coined from bite, but respelled to avoid accidental mutation to bit.)
    A word consists of the number of data bits transmitted in parallel from or to memory in one memory cycle. Word size is thus defined as a structural property of the memory. (The term catena was coined for this purpose by the designers of the Bull GAMMA 60 [fr] computer.)
    Block refers to the number of words transmitted to or from an input-output unit in response to a single input-output instruction. Block size is a structural property of an input-output unit; it may have been fixed by the design or left to be varied by the program. […]
  3. ^ "Terms And Abbreviations". MCS-4 Assembly Language Programming Manual - The INTELLEC 4 Microcomputer System Programming Manual (PDF) (Preliminary ed.). Santa Clara, California, US: Intel Corporation. December 1973. pp. v, 2-6. MCS-030-1273-1. Archived (PDF) from the original on 2020-03-01. Retrieved 2020-03-02. […] Bit - The smallest unit of information which can be represented. (A bit may be in one of two states I 0 or 1). […] Byte - A group of 8 contiguous bits occupying a single memory location. […] Character - A group of 4 contiguous bits of data. […] (NB. This Intel 4004 manual uses the term character referring to 4-bit rather than 8-bit data entities. Intel switched to use the more common term nibble for 4-bit entities in their documentation for the succeeding processor 4040 in 1974 already.)
  4. ^ "Definition of CHARACTER". Merriam-Webster. Retrieved 2018-04-01.
  5. ^ Davis, Mark (2008-05-05). "Moving to Unicode 5.1". Google Blog. Retrieved 2008-09-28.
  6. ^ a b "§5.2.4.2.1 Sizes of integer types <limits.h> / §6.2.5 Types / §6.5.3.4 The sizeof and _Alignof operators". ISO/IEC 9899:2018 - Information technology -- Programming languages -- C.
  7. ^ a b c "§1.7 The C++ memory model / §5.3.3 Sizeof". ISO/IEC 14882:2011.
  8. ^ "<limits.h>". pubs.opengroup.org. Retrieved 2018-04-01.
  9. ^ "Glossary of Unicode Terms – Code Point". Retrieved 2019-05-14.
  10. ^ "POSIX definition of Character".
  11. ^ "POSIX strlen reference".
  12. ^ "POSIX definition of Character Array".