# μ-law algorithm


The μ-law algorithm (often u-law, ulaw, or mu-law, pronounced /ˈmjuː/) is a companding algorithm, primarily used in the digital telecommunication systems of North America and Japan. Companding algorithms reduce the dynamic range of an audio signal. In analog systems, this can increase the signal-to-noise ratio (SNR) achieved during transmission; in the digital domain, it can reduce the quantization error (hence increasing the signal-to-quantization-noise ratio). These SNR increases can instead be traded for reduced bandwidth at equivalent SNR.

It is similar to the A-law algorithm used in regions where digital telecommunication signals are carried on E-1 circuits, e.g. Europe.

### Algorithm Types

There are two forms of this algorithm: an analog version and a quantized digital version.

### Continuous

For a given input x, the equation for μ-law encoding is[1]

    F(x) = sgn(x) · ln(1 + μ|x|) / ln(1 + μ),   −1 ≤ x ≤ 1,

where μ = 255 (8 bits) in the North American and Japanese standards. Note that the range of this function is −1 to 1.

μ-law expansion is then given by the inverse equation:

    F⁻¹(y) = sgn(y) · ((1 + μ)^|y| − 1) / μ,   −1 ≤ y ≤ 1.

The equations are taken from Cisco's Waveform Coding Techniques.
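As a sketch of the continuous form, the compression and expansion curves with μ = 255 can be written directly from the definitions above (the function names here are illustrative, not from any standard):

```python
import math

def mu_law_encode(x: float, mu: float = 255.0) -> float:
    """Continuous mu-law compression: F(x) = sgn(x) ln(1 + mu|x|) / ln(1 + mu)."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

def mu_law_expand(y: float, mu: float = 255.0) -> float:
    """Inverse (expansion): F^-1(y) = sgn(y) ((1 + mu)^|y| - 1) / mu."""
    return math.copysign(((1.0 + mu) ** abs(y) - 1.0) / mu, y)
```

Expansion exactly inverts compression (up to floating-point rounding), and both functions map [−1, 1] onto [−1, 1]; small inputs are boosted, which is the dynamic-range reduction described above.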

### Discrete

This is defined in ITU-T Recommendation G.711 [2]

G.711 is unclear about how values at the limit of a range should be encoded (e.g. whether +31 encodes to 0xEF or 0xF0). However, G.191 provides example C code for a μ-law encoder which yields the following encoding. Note the asymmetry between the positive and negative ranges: e.g. the negative range corresponding to +1 to +30 is −2 to −31. This is accounted for by the use of one's complement (simple bit inversion), rather than two's complement, to convert a negative value to a positive value during encoding.
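A minimal sketch of the segment-based digital codec, in the style of the classic reference encoders (16-bit signed PCM in, one μ-law byte out; the bias and clip constants follow the common G.191-style C implementations, and the final bit inversion is the one's-complement step noted above):

```python
def linear_to_ulaw(sample: int) -> int:
    """Encode a 16-bit signed PCM sample as an 8-bit mu-law byte."""
    BIAS, CLIP = 0x84, 32635
    sign = 0x80 if sample < 0 else 0x00
    magnitude = min(abs(sample), CLIP) + BIAS
    # Segment (exponent) = position of the highest set bit above bit 7.
    exponent, mask = 7, 0x4000
    while exponent > 0 and not (magnitude & mask):
        exponent -= 1
        mask >>= 1
    mantissa = (magnitude >> (exponent + 3)) & 0x0F
    # Final bit inversion: the one's-complement step described above.
    return ~(sign | (exponent << 4) | mantissa) & 0xFF

def ulaw_to_linear(byte: int) -> int:
    """Decode an 8-bit mu-law byte back to a 16-bit signed PCM sample."""
    BIAS = 0x84
    byte = ~byte & 0xFF
    sign, exponent, mantissa = byte & 0x80, (byte >> 4) & 0x07, byte & 0x0F
    magnitude = (((mantissa << 3) + BIAS) << exponent) - BIAS
    return -magnitude if sign else magnitude
```

With these conventions, linear 0 encodes to 0xFF and full-scale positive to 0x80; every byte decodes to the midpoint of its quantization interval, so decoding followed by re-encoding reproduces the byte (except that negative zero collapses onto positive zero).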

### Implementation

There are three ways of implementing a μ-law algorithm:

### Usage Justification

This encoding is used because speech has a wide dynamic range. In the analog world, when the signal is mixed with a relatively constant background noise source, its finer detail is lost. Given that this precision is compromised anyway, and assuming that the signal is to be perceived as audio by a human, one can take advantage of the fact that perceived intensity (loudness) is logarithmic[3] by compressing the signal with a logarithmic-response op-amp. In telco circuits, most of the noise is injected on the lines, so after the compressor the intended signal is perceived as significantly louder than the static, compared with an uncompressed source. This became a common telco solution, and thus, prior to widespread digital usage, the μ-law specification was developed to define an inter-compatible standard.