Entropy of the Mixture of Sources and Entropy Dimension

Authors: M. Smieja and J. Tabor, Dept. of Math. & Comput. Sci., Jagiellonian Univ., Kraków, Poland

Suppose that we are given two sources S1 and S2, both of which send us information from the data space X. We assume that we lossy-code the information coming from S1 and S2 with the same maximal error but with different alphabets P1 and P2, respectively. Consider a new source S which sends a signal produced by source S1 with probability a1 and by source S2 with probability a2 = 1 - a1. We provide a simple greedy algorithm which constructs a coding alphabet P that encodes data from S with the same maximal error as the single sources, such that the entropy h(S; P) satisfies h(S; P) ≤ a1·h(S1; P1) + a2·h(S2; P2) + 1. In the proof of this formula, the basic role is played by a new, equivalent definition of entropy based on measures instead of partitions. As a consequence, we decompose the entropy dimension of the mixture of sources as the convex combination of the entropy dimensions of the single sources. In the case of probability measures on R^N, this allows us to link the upper local dimension at a point with the upper entropy dimension of a measure via an improved version of the Young estimate.
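The "+1" slack in the bound above can be illustrated with a minimal numerical sketch. This is not the paper's greedy algorithm; it is the classical grouping identity for Shannon entropy, under the assumption that the mixture source is coded over the disjoint union of the two alphabets: then H(mix) = a1·H1 + a2·H2 + H(a1, a2), and the binary mixing entropy H(a1, a2) is at most 1 bit, which mirrors the additive constant in the paper's inequality. The distributions p1, p2 and the weight a1 below are arbitrary illustrative choices.

```python
import math

def shannon_entropy(p):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# Hypothetical source distributions over their own alphabets P1, P2.
p1 = [0.5, 0.25, 0.25]
p2 = [0.7, 0.2, 0.1]
a1 = 0.3
a2 = 1 - a1

h1 = shannon_entropy(p1)
h2 = shannon_entropy(p2)

# Mixture source coded over the disjoint union of the two alphabets:
mix = [a1 * q for q in p1] + [a2 * q for q in p2]
h_mix = shannon_entropy(mix)

# Grouping identity: H(mix) = a1*H1 + a2*H2 + H(a1, a2), with H(a1, a2) <= 1,
# so the convex combination plus one bit bounds the mixture entropy.
bound = a1 * h1 + a2 * h2 + 1
assert h_mix <= bound
print(f"H(mix) = {h_mix:.4f} bits, bound = {bound:.4f} bits")
```

The one-bit slack is exactly the cost of flagging, per symbol, which of the two sources produced it; the paper's result shows an analogous bound holds for the maximal-error (lossy) entropy h(S; P) with a greedily constructed alphabet.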

Published in: IEEE Transactions on Information Theory (Volume: 58, Issue: 5)