Integer (computer science)


In computer science, the term integer refers to a data type that represents some finite subset of the mathematical integers. Such types are also known as integral data types.[1]

Value and representation

The value of a datum with an integral type is the mathematical integer that it corresponds to. Integral types may be unsigned (capable of representing only non-negative integers) or signed (capable of representing negative integers as well).

An integer is typically specified in the program as a sequence of digits, without spaces or thousands separators, optionally prefixed with + or -. Sometimes also alternative notations are allowed, such as hexadecimal (base 16) or octal (base 8).
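For example, in C the same value can be written in decimal, hexadecimal or octal notation; a minimal sketch (other languages use similar prefixes):

```c
#include <stdio.h>

int main(void) {
    /* The same value written in three common notations. */
    int decimal = 255;      /* base 10 */
    int hex     = 0xFF;     /* base 16, prefixed with 0x */
    int octal   = 0377;     /* base 8, prefixed with 0 */

    printf("%d %d %d\n", decimal, hex, octal);  /* prints: 255 255 255 */
    return 0;
}
```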

The internal representation of this datum is the way the value is stored in the computer’s memory. Unlike mathematical integers, a typical datum in a computer has minimum and maximum possible values. Typically, all integers from the minimum through the maximum can be represented.

The maximum is sometimes called MAXINT or—as in the C standard library limits.h header—INT_MAX.
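A short C sketch that reads these limits from the limits.h constants; the printed values depend on the platform's int width:

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    /* The range of the platform's int and unsigned int types. */
    printf("INT_MIN  = %d\n", INT_MIN);
    printf("INT_MAX  = %d\n", INT_MAX);
    printf("UINT_MAX = %u\n", UINT_MAX);
    return 0;
}
```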

The most common representation of a positive integer is a string of bits, using the binary numeral system. The order of the memory bytes storing the bits varies; see endianness. The width or precision of an integral type is the number of bits in its representation. An integral type with n bits can encode 2^n numbers; for example, an unsigned type typically represents the non-negative values 0 through 2^n − 1.
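A minimal illustration using the fixed-width types from C's stdint.h: an 8-bit unsigned value covers 0 through 2^8 − 1 = 255 and wraps around modulo 2^8.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* An 8-bit unsigned type encodes 2^8 = 256 values: 0 through 255. */
    uint8_t x = 255;               /* the maximum value, all eight bits set */
    printf("%u\n", (unsigned)x);   /* prints 255 */

    x = x + 1;                     /* wraps around to 0 (modulo 2^8) */
    printf("%u\n", (unsigned)x);   /* prints 0 */
    return 0;
}
```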

There are four different ways to represent negative numbers in a binary numeral system. The most common is two’s complement, which allows a signed integral type with n bits to represent numbers from −2^(n−1) through 2^(n−1) − 1. Two’s complement arithmetic is convenient because there is a perfect one-to-one correspondence between representations and values (in particular, no separate +0 and -0), and because addition, subtraction and multiplication do not need to distinguish between signed and unsigned types. The other possibilities are offset binary, sign-magnitude and ones' complement. See Signed number representations for details.
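A brief C sketch of the two’s complement range for an 8-bit signed type, and of how the same bit pattern reads as either a signed or an unsigned value (illustrative only):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* In two's complement, an 8-bit signed type spans -2^7 .. 2^7 - 1. */
    int8_t most_negative = -128;
    int8_t most_positive =  127;

    /* The bit pattern of -1 is all ones; read as unsigned it is 255. */
    int8_t  s = -1;
    uint8_t u = (uint8_t)s;

    printf("%d %d %u\n", most_negative, most_positive, (unsigned)u);
    /* prints: -128 127 255 */
    return 0;
}
```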

Another, rather different, representation for integers is binary-coded decimal, which is still commonly used in mainframe financial applications and in databases.
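As a rough illustration of the idea, the following C sketch packs a decimal number into packed BCD, one decimal digit per 4-bit nibble (the helper to_packed_bcd is a hypothetical name for this sketch, not a standard API; real BCD implementations also handle sign and scale):

```c
#include <stdint.h>
#include <stdio.h>

/* Pack a small decimal number into packed BCD: one decimal digit per
 * 4-bit nibble, least significant digit in the lowest nibble. */
uint32_t to_packed_bcd(uint32_t n) {
    uint32_t bcd = 0;
    int shift = 0;
    while (n > 0) {
        bcd |= (n % 10) << shift;  /* store the next decimal digit */
        n /= 10;
        shift += 4;                /* move to the next nibble */
    }
    return bcd;
}

int main(void) {
    /* 1234 becomes 0x1234: each hex digit holds one decimal digit. */
    printf("0x%X\n", (unsigned)to_packed_bcd(1234));  /* prints 0x1234 */
    return 0;
}
```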

Common integral data types

Different CPUs support different integral data types. Typically, hardware will support both signed and unsigned types but only a small, fixed set of widths.

Integral type widths commonly supported in hardware include 8, 16, 32 and 64 bits. High-level programming languages provide more possibilities. It is common to have a ‘double width’ integral type that has twice as many bits as the biggest hardware-supported type. Many languages also have bit-field types (a specified number of bits, usually constrained to be less than the maximum hardware-supported width) and range types (which can represent only the integers in a specified range).
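For example, C exposes fixed widths through the stdint.h types and supports bit-field members narrower than any hardware-supported width; a minimal sketch (the exact layout of bit-fields is implementation-defined):

```c
#include <stdint.h>
#include <stdio.h>

/* A bit-field type: members narrower than any hardware-supported width. */
struct flags {
    unsigned int ready : 1;   /* 1-bit field */
    unsigned int mode  : 3;   /* 3-bit field, values 0..7 */
};

int main(void) {
    /* Fixed-width types from stdint.h (C99). */
    int8_t   a = -128;
    int16_t  b = 32767;
    int32_t  c = 2147483647;
    int64_t  d = 9223372036854775807LL;

    struct flags f = { .ready = 1, .mode = 5 };

    printf("%d %d %ld %lld %u %u\n",
           a, b, (long)c, (long long)d,
           (unsigned)f.ready, (unsigned)f.mode);
    return 0;
}
```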
