Nowadays, standard proofs of the large deviation property (LDP) for i.i.d. processes are based on the `method of types'. For Markov chains, however, the proofs in standard texts are not based on the method of types; instead, they rely on more advanced tools such as the Gärtner-Ellis theorem or Varadhan's lemma. There does exist a proof of the LDP for finite-state Markov chains based on the method of types, but it is not well known. The principal objective of this paper is therefore to present a first-principles derivation of the LDP for finite-state Markov chains, using only elementary combinatorial arguments (i.e., the method of types). The approach presented here extends naturally to multi-step Markov chains. We also relate the LDP rate function of a Markov chain to that of its time-reversed version.
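To make the objects concrete, the following is a minimal sketch (not from the paper; function names are illustrative) of the two ingredients the method of types works with for a finite-state Markov chain: the empirical pair measure (the "type" of a sample path) and the standard relative-entropy rate function $I(Q) = \sum_{i,j} Q(i,j)\log\big(Q(j\mid i)/P(i,j)\big)$ for a chain with transition matrix $P$.

```python
import math

def pair_type(path, states):
    """Empirical pair measure of a sample path: the Markov analogue of a 'type'.

    Returns Q with Q[(i, j)] = (# of transitions i -> j) / (# of transitions).
    """
    counts = {(a, b): 0 for a in states for b in states}
    for a, b in zip(path, path[1:]):
        counts[(a, b)] += 1
    n = len(path) - 1
    return {k: v / n for k, v in counts.items()}

def rate_function(Q, P, states):
    """LDP rate function for the pair empirical measure of a Markov chain.

    I(Q) = sum_{i,j} Q(i,j) * log( Q(j|i) / P(i,j) ),
    where Q(j|i) = Q(i,j) / Q_1(i) and Q_1 is the first-coordinate marginal.
    Terms with Q(i,j) = 0 contribute nothing (0 * log 0 = 0 convention).
    """
    total = 0.0
    for i in states:
        q_i = sum(Q[(i, j)] for j in states)  # first-coordinate marginal Q_1(i)
        for j in states:
            q = Q[(i, j)]
            if q > 0:
                total += q * math.log(q / (q_i * P[i][j]))
    return total
```

As a sanity check, when $Q$ is the stationary pair measure of $P$ itself, the conditional $Q(j\mid i)$ equals $P(i,j)$ and the rate function vanishes, consistent with the LDP assigning no exponential cost to typical behaviour.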