We propose a framework for multitarget tracking with feedback that accounts for scene contextual information. We demonstrate the framework on two types of context-dependent events, namely target births (i.e., objects entering the scene or reappearing after occlusion) and spatially persistent clutter. The spatial distributions of birth and clutter events are learned incrementally using Gaussian mixture models. The resulting models are used by a probability hypothesis density (PHD) filter, which spatially modulates its birth and clutter intensities according to the learned contextual information. Experimental results on a large video surveillance dataset, using a standard evaluation protocol, show that the feedback improves tracking accuracy by 9% to 14% by reducing the number of false detections and false trajectories. This improvement is achieved without increasing the computational complexity of the tracker.
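To make the idea concrete, the sketch below illustrates the kind of spatial model the abstract refers to: event locations (e.g., target births or clutter detections) are fitted with a Gaussian mixture, and the learned density is then used as a location-dependent intensity that a filter could consult. This is a minimal, illustrative sketch, not the paper's actual method; the event data, component count, EM implementation, and `total_rate` scaling are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical event locations (pixels): two clusters, e.g., two scene
# entry points where births tend to occur. Purely synthetic data.
events = np.vstack([
    rng.normal([50, 200], 10, size=(200, 2)),
    rng.normal([300, 40], 15, size=(200, 2)),
])

def mvn_pdf(x, mean, cov):
    """Multivariate normal density evaluated at each row of x."""
    d = x.shape[1]
    diff = x - mean
    inv = np.linalg.inv(cov)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * np.einsum('ni,ij,nj->n', diff, inv, diff)) / norm

def fit_gmm(x, k=2, iters=50):
    """Fit a k-component Gaussian mixture with plain batch EM."""
    n, _ = x.shape
    means = x[rng.choice(n, k, replace=False)]          # init on data points
    covs = np.array([np.cov(x.T) for _ in range(k)])    # init with data covariance
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = np.stack([weights[j] * mvn_pdf(x, means[j], covs[j])
                         for j in range(k)], axis=1)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, covariances
        nk = resp.sum(axis=0)
        weights = nk / n
        means = (resp.T @ x) / nk[:, None]
        for j in range(k):
            diff = x - means[j]
            covs[j] = (resp[:, j, None] * diff).T @ diff / nk[j]
    return weights, means, covs

weights, means, covs = fit_gmm(events)

def intensity(x, total_rate=5.0):
    """Spatial intensity: an assumed overall event rate, modulated by the
    learned mixture density so it concentrates where events were observed."""
    return total_rate * sum(w * mvn_pdf(x, m, c)
                            for w, m, c in zip(weights, means, covs))

# The intensity is higher near a learned cluster than in an empty region.
near = intensity(np.array([[50.0, 200.0]]))[0]
far = intensity(np.array([[175.0, 120.0]]))[0]
```

In a PHD-style filter, such a learned intensity would replace a spatially uniform birth or clutter rate, which is the mechanism by which context can suppress false detections in regions where clutter is persistent.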