A-law algorithm
An A-law algorithm is a standard companding algorithm, used in European 8-bit PCM digital communications systems to optimize, i.e. modify, the dynamic range of an analog signal for digitizing. It is one of the two companding algorithms in the G.711 standard from ITU-T, the other being the similar μ-law, used in North America and Japan.
For a given input $x$, the equation for A-law encoding is as follows:

$$F(x) = \operatorname{sgn}(x) \begin{cases} \dfrac{A\,|x|}{1 + \ln A}, & |x| < \dfrac{1}{A}, \\[1ex] \dfrac{1 + \ln(A\,|x|)}{1 + \ln A}, & \dfrac{1}{A} \le |x| \le 1, \end{cases}$$

where $A$ is the compression parameter. In Europe, $A = 87.6$.
A-law expansion is given by the inverse function:

$$F^{-1}(y) = \operatorname{sgn}(y) \begin{cases} \dfrac{|y|\,(1 + \ln A)}{A}, & |y| < \dfrac{1}{1 + \ln A}, \\[1ex] \dfrac{e^{\,|y|\,(1 + \ln A) - 1}}{A}, & \dfrac{1}{1 + \ln A} \le |y| \le 1. \end{cases}$$
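As a minimal sketch of the two curves above (not the bit-exact 8-bit G.711 segment codec), the functions below implement the continuous compression and expansion formulas in C; the names alaw_compress and alaw_expand are illustrative, not part of any standard API.

```c
#include <math.h>
#include <stdio.h>

#define A 87.6 /* European compression parameter */

/* Continuous A-law compression curve F(x), for x in [-1, 1]. */
static double alaw_compress(double x)
{
    double ax = fabs(x);
    double y = (ax < 1.0 / A)
                   ? A * ax / (1.0 + log(A))
                   : (1.0 + log(A * ax)) / (1.0 + log(A));
    return copysign(y, x);
}

/* Inverse curve F^{-1}(y), for y in [-1, 1]. */
static double alaw_expand(double y)
{
    double ay = fabs(y);
    double x = (ay < 1.0 / (1.0 + log(A)))
                   ? ay * (1.0 + log(A)) / A
                   : exp(ay * (1.0 + log(A)) - 1.0) / A;
    return copysign(x, y);
}

int main(void)
{
    /* Round-trip a few sample amplitudes through the curves. */
    const double samples[] = {0.001, 0.01, 0.1, 0.5, 1.0};
    for (int i = 0; i < 5; i++) {
        double y = alaw_compress(samples[i]);
        printf("x = %6.3f  F(x) = %6.3f  F^-1(F(x)) = %6.3f\n",
               samples[i], y, alaw_expand(y));
    }
    return 0;
}
```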
The reason for this encoding is that the wide dynamic range of speech does not lend itself well to efficient linear digital encoding. A-law encoding effectively reduces the dynamic range of the signal, thereby increasing the coding efficiency and resulting in a signal-to-distortion ratio that is superior to that obtained by linear encoding for a given number of bits.
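For example, with $A = 87.6$ a quiet sample $|x| = 0.001$ falls in the linear segment and maps to $F(x) = A|x|/(1 + \ln A) \approx 0.0876/5.47 \approx 0.016$, while a full-scale sample $|x| = 1$ maps to $1$. Small signals are thus boosted by a factor of roughly $A/(1 + \ln A) \approx 16$ before uniform 8-bit quantization, so they occupy many more code levels, and the quantization noise relative to the signal is correspondingly reduced.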
Comparison to μ-law
The μ-law algorithm provides a slightly larger dynamic range than the A-law at the cost of worse proportional distortion for small signals. By convention, A-law is used for an international connection if at least one country uses it.
See also
- μ-law algorithm
- Dynamic range compression
- Signal compression
- Companding
- G.711
- DS0
- Tapered floating point
External links
- Waveform Coding Techniques - Has details of implementation (but note that the A-law equation is incorrect)
- A-law implementation in C-language with example code