A decision-directed learning strategy is presented for recursively estimating (i.e., tracking) the time-varying a priori distribution of a multivariate empirical Bayes adaptive classification rule. The problem is formulated by modeling the prior distribution as a finite-state vector Markov chain and using past decisions to estimate the time evolution of the state of this chain. The solution is obtained by implementing an exact recursive nonlinear estimator for the rate vector of a multivariate discrete-time point process representing the decisions. This estimator yields the Doob decomposition of the decision process with respect to the σ-field generated by all past decisions and corresponds to the nonlinear least squares estimate of the prior distribution. Monte Carlo simulation results are provided to assess the performance of the estimator.
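The recursion described above can be illustrated with a minimal sketch. Here the unknown prior is driven by a hypothetical two-state Markov chain; each chain state fixes a class-probability vector, and the filter recursively conditions a belief over chain states on each observed decision. The transition matrix `P`, the per-state priors, and the treatment of decisions as direct class observations are all illustrative assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed two-state chain driving the prior (illustrative values).
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])          # chain state transition matrix
priors = np.array([[0.8, 0.2],        # class priors when chain is in state 0
                   [0.3, 0.7]])       # class priors when chain is in state 1

def filter_step(belief, decision):
    """One recursive update: predict the chain state forward in time,
    then condition on the observed decision (modeled here as a draw
    from the state-dependent class prior)."""
    pred = belief @ P                  # time update through the chain
    like = priors[:, decision]         # P(decision | chain state)
    post = pred * like                 # measurement update
    return post / post.sum()           # renormalize

# Simulate the chain and the resulting decisions, tracking the prior.
belief = np.array([0.5, 0.5])          # initial belief over chain states
state = 0
for t in range(200):
    state = rng.choice(2, p=P[state])              # chain moves
    decision = rng.choice(2, p=priors[state])      # decision is observed
    belief = filter_step(belief, decision)         # recursive estimate

est_prior = belief @ priors            # filtered estimate of the prior
```

The per-step update is the standard hidden-Markov-chain forward filter; the paper's construction via the Doob decomposition of the decision process leads to the same kind of past-decisions-conditioned recursion.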