We examine the problem of classifying biological sequences, focusing on the challenge of generalizing to novel input data. The high dimensionality of sequence data results in an extremely sparsely populated input space, which motivates the need for regularization (a form of inductive bias) in order to achieve generalization. We discuss regularization in the context of standard Neural Networks and Deep Belief Networks (DBNs), and provide experimental results on an example problem of DNA barcode classification. Our results support the importance of using an effective regularization method, and indicate that the adaptive, data-dependent regularization mechanism of a DBN is more powerful than the simpler methods of model selection, weight decay, and early stopping.
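To make the baseline regularizers concrete, the sketch below applies L2 weight decay and validation-based early stopping to a toy linear classifier over one-hot encoded DNA strings. This is a minimal illustration only, not the paper's model or data: the synthetic GC-rich vs. AT-rich classes, the sequence length, and all hyperparameters are hypothetical choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(seq):
    """Encode a DNA string over {A, C, G, T} as a flat one-hot vector."""
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    v = np.zeros((len(seq), 4))
    v[np.arange(len(seq)), [idx[c] for c in seq]] = 1.0
    return v.ravel()

def sample(n, bases, length=20):
    """Draw n random sequences uniformly over the given bases (toy data)."""
    return ["".join(rng.choice(list(bases), size=length)) for _ in range(n)]

# Hypothetical two-class problem: class 0 is GC-rich, class 1 is AT-rich.
X = np.array([one_hot(s) for s in sample(40, "GC") + sample(40, "AT")])
y = np.array([0] * 40 + [1] * 40)
perm = rng.permutation(len(y))
X, y = X[perm], y[perm]
Xtr, ytr, Xva, yva = X[:60], y[:60], X[60:], y[60:]

def val_loss(w, X, y):
    """Cross-entropy on held-out data; used to monitor early stopping."""
    p = 1 / (1 + np.exp(-X @ w))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

w = np.zeros(X.shape[1])
lam, lr = 1e-3, 0.5          # lam is the L2 weight-decay strength
best, patience, bad = np.inf, 5, 0
for epoch in range(200):
    p = 1 / (1 + np.exp(-Xtr @ w))
    # Gradient of cross-entropy plus the 2*lam*w weight-decay term.
    grad = Xtr.T @ (p - ytr) / len(ytr) + 2 * lam * w
    w -= lr * grad
    v = val_loss(w, Xva, yva)
    if v < best - 1e-6:
        best, bad = v, 0
    else:
        bad += 1
        if bad >= patience:  # early stopping: halt when validation stalls
            break

acc = np.mean(((1 / (1 + np.exp(-Xva @ w))) > 0.5) == yva)
```

Both mechanisms constrain the fitted weights with fixed, data-independent rules (a shrinkage penalty and a halting criterion), which is the contrast the abstract draws with a DBN's learned, data-dependent prior over representations.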