# Binary-coded decimal


In computing and electronic systems, binary-coded decimal (BCD) (sometimes called natural binary-coded decimal, NBCD) or, in its most common modern implementation, packed decimal, is an encoding for decimal numbers in which each digit is represented by its own binary sequence. Its main virtue is that it allows easy conversion to decimal digits for printing or display, and allows faster decimal calculations. Its main drawback is a small increase in the complexity of the circuits needed to implement arithmetic operations. Uncompressed BCD is also a relatively inefficient encoding—it occupies more space than a purely binary representation.

In BCD, a digit is usually represented by four bits which, in general, represent the decimal digits 0 through 9. Other bit combinations are sometimes used for a sign or for other indications (e.g., error or overflow).
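As a concrete illustration of the scheme described above, here is a minimal Python sketch (the function names `to_bcd` and `from_bcd` are chosen for this example) that packs each decimal digit of an integer into its own 4-bit nibble and decodes it back:

```python
def to_bcd(n: int) -> int:
    """Encode a non-negative integer as packed BCD: one 4-bit nibble per decimal digit."""
    if n < 0:
        raise ValueError("this sketch handles non-negative integers only")
    bcd, shift = 0, 0
    while True:
        bcd |= (n % 10) << shift  # place the low decimal digit in the next nibble
        n //= 10
        shift += 4
        if n == 0:
            return bcd

def from_bcd(bcd: int) -> int:
    """Decode a packed-BCD value back to an ordinary binary integer."""
    n, place = 0, 1
    while bcd:
        digit = bcd & 0xF  # one nibble = one decimal digit
        if digit > 9:
            raise ValueError("invalid BCD nibble")  # 0xA-0xF are not decimal digits
        n += digit * place
        place *= 10
        bcd >>= 4
    return n

print(hex(to_bcd(1234)))  # 0x1234 — the nibbles mirror the decimal digits
print(from_bcd(0x1234))   # 1234
```

Note the defining property visible in the output: the hexadecimal form of a packed-BCD value reads exactly like the decimal number it encodes, which is what makes conversion for display so cheap.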

Although uncompressed BCD is not as widely used as it once was, decimal fixed-point and floating-point are still important and continue to be used in financial, commercial, and industrial computing.[1]

Recent decimal floating-point representations use base-10 exponents, but not BCD encodings. Current hardware implementations, however, convert the compressed decimal encodings to BCD internally before carrying out computations. Software implementations of decimal arithmetic typically use BCD or some other base-10^n encoding, depending on the operation.
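To make the cost of decimal arithmetic on BCD concrete, the following sketch (a hypothetical `bcd_add`, not taken from any particular library) shows the classic add-and-adjust technique a software implementation might use: add nibbles in binary, then correct any result above 9 by adding 6, carrying into the next digit:

```python
def bcd_add(a: int, b: int, ndigits: int = 8) -> int:
    """Add two packed-BCD values digit by digit.

    After the binary add of each nibble pair, a result above 9 is
    adjusted by adding 6 (i.e. 16 - 10), which converts the binary
    carry-out into a correct decimal carry.
    """
    result, carry = 0, 0
    for i in range(ndigits):
        da = (a >> (4 * i)) & 0xF
        db = (b >> (4 * i)) & 0xF
        s = da + db + carry
        if s > 9:
            s += 6       # decimal-adjust: skip the six unused codes 0xA-0xF
        carry = s >> 4   # carry into the next decimal digit
        result |= (s & 0xF) << (4 * i)
    return result

print(hex(bcd_add(0x0199, 0x0001)))  # 0x200 — BCD for 199 + 1 = 200
```

The extra compare-and-adjust per digit is exactly the "small increase in complexity" mentioned earlier; hardware versions of this correction appear as decimal-adjust instructions in several classic CPU architectures.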