The term computer numbering formats refers to the schemes implemented in digital computer and calculator hardware and software to represent numbers.
Bits, bytes, nibbles, and unsigned integers
Bits
A bit can be understood as a value of either 1 or 0: on or off, yes or no, true or false, or the setting of a switch or toggle of some kind. A single bit can represent exactly one of two states:
one-digit binary value:    decimal value:

0                          0
1                          1          two distinct values
While a single bit, on its own, is able to represent only two values, a string of two bits together is able to represent twice as many values:
two-digit binary value:    decimal value:

00                         0
01                         1
10                         2
11                         3          four distinct values
A series of three binary digits can likewise designate twice as many distinct values as the two-bit string.
three-digit binary value:    decimal value:

000                          0
001                          1
010                          2
011                          3
100                          4
101                          5
110                          6
111                          7        eight distinct values
As the number of bits in a sequence goes up, the number of possible combinations of 0s and 1s increases exponentially. The examples above show that a single bit allows only two value combinations, two bits combined can make four separate values, and three bits yield eight possibilities; the number of possible combinations doubles with each binary digit added:
bits in series (b):    number of possible values (N):

1                      2
2                      4
3                      8
4                      16
5                      32
6                      64
7                      128
8                      256
...                    2^{b} = N
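As an illustration of the 2^{b} = N relationship, the following short C program (a minimal sketch; nothing about it is specific to any particular machine) prints the number of distinct values available for each bit width from 1 to 8:

    #include <stdio.h>

    int main(void)
    {
        /* For b bits, the number of distinct values N is 2 raised to the power b. */
        for (int b = 1; b <= 8; b++) {
            unsigned int n = 1u << b;   /* shifting 1 left by b positions computes 2^b */
            printf("%d bits -> %u possible values\n", b, n);
        }
        return 0;
    }

Run as-is, it reproduces the table above, from 2 values for a single bit up to 256 values for eight bits.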
A byte is a sequence of eight bits, or binary digits, that can represent one of 256 possible values. Modern computers process information in 8-bit units, or in some multiple thereof (such as 16, 32, or 64 bits) at a time. A group of 8 bits is now widely used as a fundamental unit, and has been given the name octet. A computer's smallest addressable memory unit (a byte) is typically an octet, so the word byte is now generally understood to mean an octet.
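As a rough illustration, assuming a typical platform on which a byte is an octet, a short C program can report these unit sizes with the fixed-width types from <stdint.h> and show that a single byte holds only 256 distinct values:

    #include <stdio.h>
    #include <stdint.h>
    #include <limits.h>

    int main(void)
    {
        /* CHAR_BIT is the number of bits in a byte; it is 8 on virtually all modern systems. */
        printf("bits per byte: %d\n", CHAR_BIT);
        printf("bytes in a 16-bit integer: %zu\n", sizeof(uint16_t));
        printf("bytes in a 32-bit integer: %zu\n", sizeof(uint32_t));
        printf("bytes in a 64-bit integer: %zu\n", sizeof(uint64_t));

        /* A single byte (octet) holds one of 256 values, 0 through 255. */
        uint8_t byte = 255;
        byte = byte + 1;                      /* wraps around to 0: only 256 values fit */
        printf("255 + 1 stored in one byte: %u\n", (unsigned)byte);
        return 0;
    }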
Nibbles
A unit of four bits, or half an octet, is often called a nibble (or nybble). It can encode 16 different values, such as the numbers 0 to 15. In principle any assignment of bit patterns to values could be used, but in practice the most common scheme is:
0000 = decimal 00 1000 = decimal 08
0001 = decimal 01 1001 = decimal 09
0010 = decimal 02 1010 = decimal 10
0011 = decimal 03 1011 = decimal 11
0100 = decimal 04 1100 = decimal 12
0101 = decimal 05 1101 = decimal 13
0110 = decimal 06 1110 = decimal 14
0111 = decimal 07 1111 = decimal 15
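In practice a nibble is usually handled as half of a byte, extracted by masking and shifting. A minimal C sketch (the variable names are purely illustrative):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint8_t byte = 0xD6;                 /* bit pattern 1101 0110 */
        uint8_t high = (byte >> 4) & 0x0F;   /* upper nibble: 1101 = decimal 13 */
        uint8_t low  = byte & 0x0F;          /* lower nibble: 0110 = decimal  6 */
        printf("high nibble = %u, low nibble = %u\n", (unsigned)high, (unsigned)low);
        return 0;
    }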
The encoding tabulated above (rather than a Gray code ordering) is used because it is a positional notation, like the decimal notation that humans are used to. For example, the decimal number:
 7531
is commonly interpreted as:
 (7 × 1000) + (5 × 100) + (3 × 10) + (1 × 1)
or, using powers-of-10 notation:
 (7 × 10^{3}) + (5 × 10^{2}) + (3 × 10^{1}) + (1 × 10^{0})
(Note that any nonzero number raised to the zero power is 1.)
Each digit in the number represents a value from 0 to 9 (ten different possible values), which is why this is called a decimal or base-10 number. Each digit also carries a weight, a power of ten, determined by its position.
Similarly, in the binary encoding scheme tabulated above, the (decimal) value 13 is encoded as:
 1101
which, using powers-of-2 notation, is interpreted as:
 (1 × 2^{3}) + (1 × 2^{2}) + (0 × 2^{1}) + (1 × 2^{0}) = 8 + 4 + 0 + 1 = 13
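The digit-times-weight rule can also be written out in code. The following C sketch (the function name place_value is illustrative, not a standard library routine) evaluates a string of digits in a given base and reproduces both expansions above:

    #include <stdio.h>

    /* Evaluate a digit string using positional weights: each step multiplies the
       running total by the base and adds the next digit, which is equivalent to
       summing digit * base^position over all positions. */
    unsigned long place_value(const char *digits, unsigned int base)
    {
        unsigned long value = 0;
        for (const char *p = digits; *p != '\0'; p++) {
            value = value * base + (unsigned long)(*p - '0');
        }
        return value;
    }

    int main(void)
    {
        printf("%lu\n", place_value("7531", 10));   /* prints 7531 */
        printf("%lu\n", place_value("1101", 2));    /* prints 13   */
        return 0;
    }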