Also, in Huffman coding, why do we use fewer bits for high-frequency data? I don't get the intuition. Shouldn't it be the opposite, or does this have something to do with the psychoacoustic model of human hearing, which says that our cochlea is more sensitive to lower frequencies than to higher ones, hence we use more bits for the lower frequencies?
- Equal loudness contours might suggest that at some frequencies there are fewer total differentiable symbols above some masking floor. – hotpaw2 Feb 01 '14 at 22:38
1 Answer
In Huffman coding, frequency refers to the number of times a particular discrete source symbol (or sequence of symbols) appears in the input. It is not the same concept as frequency in a continuous signal, such as the frequency of a sound wave. As such, it has nothing to do with psychoacoustics or the cochlea.
We use fewer bits for the higher-frequency source symbols because frequent symbols carry less information per occurrence (their self-information is lower), and assigning shorter codewords to the symbols that occur most often minimizes the average number of bits per symbol, which gives better compression.
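To make this concrete, here is a minimal sketch (my own illustration, not part of the original answer) of building a Huffman code with Python's standard heapq module; the example string and symbol names are made up. The most frequent symbol ends up with the shortest codeword.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Return a dict mapping each symbol in `text` to its Huffman codeword."""
    freq = Counter(text)
    if len(freq) == 1:
        # Degenerate case: a single distinct symbol still needs a 1-bit code.
        return {next(iter(freq)): "0"}
    # Each heap entry: (total frequency, tie-breaker, {symbol: partial codeword}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, codes1 = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, codes2 = heapq.heappop(heap)
        # Prefix "0" onto one subtree's codewords and "1" onto the other's.
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

text = "aaaaaaaabbbbccd"   # hypothetical input: 'a' x 8, 'b' x 4, 'c' x 2, 'd' x 1
for sym, code in sorted(huffman_codes(text).items()):
    print(sym, text.count(sym), code)
# 'a', the most frequent symbol, gets the shortest codeword;
# the rare symbols 'c' and 'd' get the longest ones.
```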
Aaron
- Could you tell me a bit more? How does assigning fewer bits to more frequent source symbols give you better compression? – Ali Gajani Feb 01 '14 at 22:43
- The total number of bits used in the compression is the sum over all symbols of the number of bits used to represent that symbol multiplied by the number of times that the symbol occurs. If you use more bits for infrequent symbols it won't cost you too much, precisely because they are infrequent, whereas if you use few bits for a common (high-frequency) symbol, your savings are multiplied by how many times that symbol occurs. – Aaron Feb 01 '14 at 23:55
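As a quick worked check of that arithmetic (my own illustration; the counts and code lengths below are made up, matching the sketch above), the total cost is each symbol's codeword length multiplied by its count:

```python
# Total bits = sum over symbols of (codeword length) x (number of occurrences).
counts = {"a": 8, "b": 4, "c": 2, "d": 1}            # hypothetical symbol frequencies

fixed_lengths   = {"a": 2, "b": 2, "c": 2, "d": 2}   # fixed 2-bit code for 4 symbols
huffman_lengths = {"a": 1, "b": 2, "c": 3, "d": 3}   # shorter codes for frequent symbols

fixed_total   = sum(fixed_lengths[s]   * n for s, n in counts.items())  # 30 bits
huffman_total = sum(huffman_lengths[s] * n for s, n in counts.items())  # 25 bits
print(fixed_total, huffman_total)
```

The longer codewords for 'c' and 'd' cost little because they occur rarely, while the 1-bit code for 'a' saves a bit on every one of its eight occurrences, so the variable-length code comes out ahead overall.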