In information theory, an entropy encoding is a lossless data compression scheme that is independent of the specific characteristics of the medium.
One of the main types of entropy coding creates and assigns a unique prefix code to each unique symbol that occurs in the input. These entropy encoders then compress data by replacing each fixed-length input symbol with the corresponding variable-length prefix codeword. The length of each codeword is approximately proportional to the negative logarithm of the symbol's probability, so the most common symbols use the shortest codes.
According to Shannon's source coding theorem, the optimal code length for a symbol is −log_b(P), where b is the number of symbols used to make output codes and P is the probability of the input symbol.
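As a concrete illustration, the following Python sketch evaluates this formula for a small symbol distribution. The probabilities and symbol names are invented for the example; note that the optimal lengths need not be integers, which is why practical codes like Huffman coding only approximate them.

```python
import math

# Hypothetical symbol probabilities, chosen for illustration only.
probabilities = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

b = 2  # size of the output alphabet (binary codes)

for symbol, p in probabilities.items():
    # Shannon's optimal code length: -log_b(P)
    optimal_length = -math.log(p, b)
    print(f"{symbol}: P = {p}  optimal length = {optimal_length:.2f} bits")
```

For this particular distribution the optimal lengths happen to be whole numbers of bits (1, 2, 3, 3), so a prefix code can match them exactly.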
Two of the most common entropy encoding techniques are Huffman coding and arithmetic coding. If the approximate entropy characteristics of a data stream are known in advance (especially for signal compression), a simpler static code may be useful. These static codes include universal codes (such as Elias gamma coding or Fibonacci coding) and Golomb codes (such as unary coding or Rice coding).
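To make the prefix-code idea concrete, here is a minimal sketch of Huffman coding in Python. It follows the standard greedy algorithm (repeatedly merging the two least-frequent subtrees), but it is an illustrative sketch rather than a production codec; the function name huffman_codes and the sample string are arbitrary choices for the demo.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a prefix code for `data` with Huffman's greedy algorithm."""
    freq = Counter(data)
    # Heap entries: (weight, tie-breaker, tree); a tree is either a
    # symbol or a (left, right) pair. The tie-breaker avoids comparing
    # trees when weights are equal.
    heap = [(w, i, sym) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, next_id, (left, right)))
        next_id += 1
    codes = {}
    def assign(node, prefix):
        if isinstance(node, tuple):
            assign(node[0], prefix + "0")
            assign(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"  # edge case: one unique symbol
    assign(heap[0][2], "")
    return codes

text = "abracadabra"
codes = huffman_codes(text)
encoded = "".join(codes[ch] for ch in text)
print(codes)    # frequent symbols receive the shortest codewords
print(encoded)
```

Running this on "abracadabra" assigns the most frequent symbol, 'a', the shortest codeword, in line with the proportionality described above.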
Entropy as a measure of similarity
Besides using entropy encoding as a way to compress digital data, an entropy encoder can also be used to measure the amount of similarity between streams of data. This is done by generating an entropy coder/compressor for each class of data; unknown data is then classified by feeding the uncompressed data to each compressor and seeing which compressor yields the highest compression ratio. The coder with the best compression was probably trained on the data most similar to the unknown data.
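A rough sketch of this classification-by-compression idea follows. As a simplifying assumption, it substitutes zlib's general-purpose compressor for a per-class entropy coder, and the class samples and helper name are invented placeholders; the underlying principle is the same, in that the class whose data best models the unknown input compresses it with the least overhead.

```python
import zlib

def compression_gain(class_sample, unknown):
    """Extra bytes needed to compress `unknown` after `class_sample`.

    A smaller gain suggests the unknown data is statistically similar
    to the class sample, since the compressor reuses its model of it.
    """
    base = len(zlib.compress(class_sample.encode()))
    combined = len(zlib.compress((class_sample + unknown).encode()))
    return combined - base

# Hypothetical training samples, one per class (invented for the demo).
classes = {
    "english": "the quick brown fox jumps over the lazy dog " * 20,
    "digits": "3141592653589793238462643383279502884197 " * 20,
}

unknown = "the lazy dog sleeps while the quick fox runs"

best = min(classes, key=lambda name: compression_gain(classes[name], unknown))
print("Most similar class:", best)  # expected: english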
External links
An earlier (open content) version of the above article was posted on PlanetMath.
