While many methods have been proposed for detecting disease outbreaks from pre-diagnostic data, their performance is usually not well understood. In this paper, we describe the relationship between a method's forecast accuracy and its detection accuracy. We argue that most existing temporal detection methods for biosurveillance can be characterized as a forecasting component coupled with a monitoring/detection component. We show that improved forecasting leads to improved detection, and we quantify the relationship between forecast accuracy and detection metrics under different scenarios. Forecast accuracy can then be used to rate an algorithm's expected outbreak-detection performance. We use simulation to compare empirical performance with theoretical results, and we also show examples with authentic biosurveillance data.
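To make the forecast-plus-monitoring decomposition concrete, the following is a minimal illustrative sketch (not the paper's method): a simple forecaster (a moving average of recent counts) paired with a Shewhart-style monitor that raises an alarm when the forecast residual exceeds a multiple of the residual standard deviation. Function names, the window length, and the threshold are hypothetical choices for illustration.

```python
import statistics

def moving_average_forecast(history, window=7):
    """Forecast the next count as the mean of the last `window` observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def detect_outbreaks(counts, window=7, z_threshold=3.0):
    """Flag time points whose observed count exceeds the forecast by more than
    z_threshold standard deviations of the past forecast residuals."""
    alarms, residuals = [], []
    for t in range(window, len(counts)):
        forecast = moving_average_forecast(counts[:t], window)
        residual = counts[t] - forecast
        if len(residuals) >= 2:
            # Estimate residual variability from past residuals; fall back to
            # 1.0 when the baseline is constant (zero standard deviation).
            sigma = statistics.stdev(residuals) or 1.0
            if residual > z_threshold * sigma:
                alarms.append(t)
        residuals.append(residual)
    return alarms
```

In this framing, a more accurate forecaster shrinks the residual variance, so a genuine outbreak signal stands out against a tighter alarm threshold, which is the mechanism linking forecast accuracy to detection performance.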