Generalization error, the probability that a detection rule learned from training samples errs on new, unseen samples, is a fundamental quantity to characterize. However, characterizations of generalization error in the statistical learning theory literature are often loose and practically unusable for optimizing detection systems. In this work, focusing on learning linear discriminant analysis detection rules from spatially correlated sensor measurements, a tight generalization error approximation is developed and used to optimize the parameters of a sensor network detection system, including network settings. The approximation is also used to derive a detection error exponent and to select an optimal subset of deployed sensor nodes. A Gauss-Markov random field models the spatial correlation, and weak laws of large numbers from geometric probability are employed in the analysis.
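As a minimal illustrative sketch of the setting described above (not the paper's construction), the following trains a Fisher LDA detection rule on correlated Gaussian sensor measurements and estimates its generalization error empirically on held-out samples. The AR(1) covariance, signal means, and all numerical parameters are assumptions chosen only for illustration.

```python
import numpy as np

# Illustrative sketch: learn an LDA detection rule from spatially
# correlated sensor measurements, then estimate generalization error
# on fresh samples. All model parameters here are assumptions.
rng = np.random.default_rng(0)

d = 10          # number of sensors
rho = 0.5       # correlation between adjacent sensors (AR(1) stand-in)
n_train = 200   # training samples per hypothesis
n_test = 5000   # held-out samples per hypothesis

# AR(1) covariance as a simple stand-in for a Gauss-Markov field
idx = np.arange(d)
Sigma = rho ** np.abs(idx[:, None] - idx[None, :])
L = np.linalg.cholesky(Sigma)

mu0 = np.zeros(d)          # H0: noise only
mu1 = 0.5 * np.ones(d)     # H1: signal present

def sample(mu, n):
    """Draw n correlated Gaussian measurement vectors with mean mu."""
    return mu + rng.standard_normal((n, d)) @ L.T

X0, X1 = sample(mu0, n_train), sample(mu1, n_train)

# Fisher LDA learned from training data: w = S^{-1}(m1 - m0),
# with a midpoint threshold between the estimated class means.
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
S = 0.5 * (np.cov(X0.T) + np.cov(X1.T))
w = np.linalg.solve(S, m1 - m0)
t = w @ (m0 + m1) / 2

# Empirical generalization error on fresh (unseen) samples,
# averaging the two conditional error probabilities.
T0, T1 = sample(mu0, n_test), sample(mu1, n_test)
err = 0.5 * ((T0 @ w > t).mean() + (T1 @ w <= t).mean())
print(f"empirical generalization error: {err:.3f}")
```

A tight analytical approximation of this empirical error, as a function of network parameters such as `d` and `rho`, is what enables the system optimization the abstract describes.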