Super-resolution reconstruction of image sequences is highly sensitive to data outliers and to the quality of the motion estimation. This paper addresses the design of the least-mean-square algorithm applied to super-resolution reconstruction (LMS-SRR). Based on a statistical model of the algorithm's behavior, we propose a design strategy that reduces the effects of outliers on the reconstructed image sequence. We show that the proposed strategy leads the algorithm to close-to-optimum performance in both the transient and the steady-state phases of adaptation in practical situations where registration errors occur. The analysis also shows that, unlike in the traditional LMS algorithm, lower values of the step size do not necessarily lead to a better steady-state mean-square error.
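The abstract does not state the update equation, but LMS-type SRR algorithms are commonly written as x(t+1) = x(t) + mu * W(t)^T [y(t) - W(t) x(t)], where x is the high-resolution (HR) estimate, y(t) a low-resolution (LR) frame, W(t) the acquisition operator (motion compensation, blur, and decimation), and mu the step size whose design the paper studies. The sketch below is a minimal illustration under that assumption: it replaces the full warping/blur model with a toy block-averaging operator, and all function names and parameter values (decimate, decimate_adjoint, lms_srr_step, mu, factor) are assumptions made for illustration, not taken from the paper.

```python
import numpy as np

# Illustrative sketch only: the paper's LMS-SRR model includes motion
# compensation and blur in the acquisition operator, which this toy example
# omits. The observation model and all parameters below are assumptions.

def decimate(x_hr, factor):
    """Toy acquisition operator W: block-average the HR image by `factor`."""
    h, w = x_hr.shape
    return x_hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def decimate_adjoint(y_lr, factor):
    """Adjoint W^T of the block-averaging operator (spread each LR pixel back)."""
    return np.kron(y_lr, np.ones((factor, factor))) / (factor * factor)

def lms_srr_step(x_hr, y_lr, mu, factor):
    """One LMS-SRR iteration: x <- x + mu * W^T (y - W x)."""
    residual = y_lr - decimate(x_hr, factor)   # error in the LR domain
    return x_hr + mu * decimate_adjoint(residual, factor)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    factor = 2
    x_true = rng.random((16, 16))              # unknown HR image
    x_hat = np.zeros_like(x_true)              # initial HR estimate

    for t in range(200):
        # Each LR frame is a decimated, noisy observation of the HR image.
        y_t = decimate(x_true, factor) + 0.01 * rng.standard_normal((8, 8))
        # Occasional outlier frame, e.g. a gross registration failure.
        if t % 50 == 25:
            y_t += rng.standard_normal(y_t.shape)
        x_hat = lms_srr_step(x_hat, y_t, mu=1.0, factor=factor)

    resid = np.linalg.norm(decimate(x_true - x_hat, factor))
    print(f"final LR-domain residual norm: {resid:.4f}")
```

In this toy setting a larger mu tracks new frames faster but also amplifies the occasional outlier frames, while a very small mu slows recovery between outliers; this is only meant to illustrate the kind of step-size trade-off the abstract refers to, not to reproduce the paper's analysis.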