
An elementary derivation of the large deviation rate function for finite state Markov chains


Author: M. Vidyasagar, Erik Jonsson School of Engineering & Computer Science, University of Texas at Dallas, Richardson, TX, USA

Nowadays, standard proofs of the large deviation property (LDP) for i.i.d. processes are based on the "method of types" [4], [2]. For Markov chains, however, the proofs in standard texts are not based on the method of types; instead they rely on more advanced tools such as the Gärtner–Ellis theorem (see e.g. [6]) or Varadhan's lemma (see e.g. [15]). There is in fact a proof of the LDP for finite state Markov chains based on the method of types [16], but it is not well known and is not cited in [6]. The principal objective of this paper is therefore to present a first-principles derivation of the LDP for finite state Markov chains, using only elementary combinatorial arguments (i.e., the method of types). The approach presented here extends naturally to multi-step Markov chains. We also relate the LDP rate function of a Markov chain to that of its time-reversed version.
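As an illustration of what the method-of-types argument yields, the following sketch evaluates the standard rate function for the empirical pair measure (the "doublet type") of a sample path of a finite-state chain: I(ν) = Σ_{i,j} ν(i,j) log[ ν(i,j) / (ν̄(i) A(i,j)) ], where ν̄ is the first marginal of ν and A is the transition matrix. The two-state matrix and stationary distribution below are hypothetical examples, not taken from the paper:

```python
import math

# Hypothetical two-state transition matrix A (rows sum to 1).
A = [[0.9, 0.1],
     [0.2, 0.8]]

def rate(nu):
    """Rate function for an empirical pair measure nu[i][j]:
    I(nu) = sum_{i,j} nu(i,j) * log( nu(i,j) / (nubar(i) * A[i][j]) ),
    where nubar(i) = sum_j nu(i,j) is the first marginal of nu."""
    n = len(A)
    nubar = [sum(nu[i]) for i in range(n)]
    total = 0.0
    for i in range(n):
        for j in range(n):
            if nu[i][j] > 0:  # terms with nu(i,j) = 0 contribute 0
                total += nu[i][j] * math.log(nu[i][j] / (nubar[i] * A[i][j]))
    return total

# Sanity check: the stationary pair measure nu(i,j) = pi(i) A(i,j)
# has rate 0, since every log-ratio inside the sum equals log 1.
pi = [2 / 3, 1 / 3]  # stationary distribution of A above
nu_star = [[pi[i] * A[i][j] for j in range(2)] for i in range(2)]
print(rate(nu_star))  # -> 0.0 (up to floating-point error)
```

Any empirical pair measure different from the stationary one gets a strictly positive rate, which is exactly the exponential decay rate of its probability that the combinatorial type-counting argument produces.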

Published in:

Proceedings of the 48th IEEE Conference on Decision and Control, held jointly with the 28th Chinese Control Conference (CDC/CCC 2009)

Date of Conference:

15-18 Dec. 2009