Suppose that we are given two sources S1 and S2 which both send us information from the data space X. We assume that the information coming from S1 and S2 is lossy-coded with the same maximal error but with different alphabets P1 and P2, respectively. Consider a new source S which sends a signal produced by source S1 with probability a1 and by source S2 with probability a2 = 1 - a1. We provide a simple greedy algorithm which constructs a coding alphabet P that encodes data from S with the same maximal error as the single sources, and whose entropy h(S;P) satisfies h(S;P) ≤ a1 h(S1;P1) + a2 h(S2;P2) + 1. In the proof of this formula, the basic role is played by a new, equivalent definition of entropy based on measures instead of partitions. As a consequence, we decompose the entropy dimension of a mixture of sources as the convex combination of the entropy dimensions of the single sources. In the case of probability measures on ℝ^N, this allows us to link the upper local dimension at a point with the upper entropy dimension of a measure by an improved version of the Young estimate.
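The inequality above parallels a classical fact about Shannon entropy: for a mixture distribution, H(a1·p1 + a2·p2) ≤ a1·H(p1) + a2·H(p2) + H(a1, a2), and the binary entropy term H(a1, a2) is at most 1 bit. The following sketch is only a numerical illustration of this analogous finite-alphabet bound, not of the paper's greedy algorithm (whose details are not given in the abstract); the distributions p1, p2 and the weight a1 are arbitrary choices for the demonstration.

```python
import math

def shannon_entropy(p):
    """Shannon entropy in bits of a probability vector."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Two hypothetical source distributions over a common 4-symbol alphabet.
p1 = [0.5, 0.25, 0.25, 0.0]
p2 = [0.1, 0.2, 0.3, 0.4]
a1 = 0.6
a2 = 1 - a1

# Distribution of the mixed source S: draw from p1 w.p. a1, from p2 w.p. a2.
mix = [a1 * x + a2 * y for x, y in zip(p1, p2)]

h_mix = shannon_entropy(mix)
# Convex combination of the single-source entropies, plus 1 bit of slack
# (the cost of "remembering" which source produced the signal).
bound = a1 * shannon_entropy(p1) + a2 * shannon_entropy(p2) + 1

print(f"h(S)  = {h_mix:.4f} bits")
print(f"bound = {bound:.4f} bits")
assert h_mix <= bound
```

Running the check for other weights a1 in (0, 1) gives the same outcome, since the extra term H(a1, a2) never exceeds log2(2) = 1 bit.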