Stochastic gradient algorithms are widely used in signal processing. Whereas stopping rules for deterministic descent algorithms are easy to construct, for instance from the norm of the gradient of the objective function, the situation is more complicated for stochastic methods, since the gradient must first be estimated. We show how a simple Kalman filter can be used to estimate the gradient, together with an associated confidence, and thus to construct a stopping rule for the algorithm. The construction is illustrated by a simple example. The filter might also be used to estimate the Hessian, which would open the way to a possible acceleration of the algorithm. Such developments are briefly discussed.
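To make the idea concrete, here is a minimal sketch (not the paper's exact construction; all parameter values, noise models, and thresholds below are illustrative assumptions): a scalar Kalman filter with a random-walk state model tracks the slowly drifting gradient of a stochastic objective, and SGD stops once the filter's confidence interval around the gradient estimate lies inside a small band around zero.

```python
import math
import random

random.seed(0)

def noisy_grad(x, noise_std=1.0):
    # Noisy gradient oracle for f(x) = 0.5 * (x - 3)^2 (assumed toy objective).
    return (x - 3.0) + random.gauss(0.0, noise_std)

x = 10.0                 # SGD iterate
step = 0.05              # SGD step size
g_hat, P = 0.0, 10.0     # Kalman state: gradient estimate and its variance
Q, R = 0.0005, 1.0       # process / observation noise variances (tuning knobs)
tol, z = 0.5, 2.0        # stop when |g_hat| + z*sqrt(P) < tol (~95% confidence)

for k in range(10000):
    g_obs = noisy_grad(x)
    # Kalman predict: random-walk model, the true gradient drifts slowly.
    P = P + Q
    # Kalman update with the noisy gradient observation.
    K = P / (P + R)
    g_hat = g_hat + K * (g_obs - g_hat)
    P = (1.0 - K) * P
    # Stopping rule: the confidence interval for the gradient is inside [-tol, tol].
    if abs(g_hat) + z * math.sqrt(P) < tol:
        break
    # The SGD step itself still uses the raw noisy gradient.
    x = x - step * g_obs

print(f"stopped at iteration {k}, x = {x:.3f}")
```

The process-noise variance `Q` encodes how fast the true gradient is expected to drift as the iterate moves; a larger step size warrants a larger `Q`, at the cost of a wider steady-state confidence interval and hence a later (or never-triggered) stop for the same `tol`.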