To solve the l1-norm minimisation problem, many algorithms, such as the l1-Magic solver, use the conjugate gradient (CG) method to speed up the implementation. Since the dictionary handled by CG is typically large and dense, the time complexity of these algorithms remains high. Because signals can be modelled by a small set of atoms in a dictionary, proposed is a fast sparse representation model (FSRM) that exploits this property, and it is shown that the l1-norm minimisation problem can be reduced from a large, dense linear system to a small, sparse one. Experimental results on image recognition demonstrate that the FSRM achieves a double-digit speed-up over the l1-Magic solver with comparable accuracy.
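To make the underlying problem concrete, the following is a minimal sketch of l1-regularised sparse coding over an overcomplete dictionary, solved with the standard iterative soft-thresholding algorithm (ISTA). This is an illustration of the generic l1-norm minimisation setting only, not the FSRM or the l1-Magic solver; the function and variable names are hypothetical.

```python
import numpy as np

def ista(A, b, lam=0.01, n_iter=1000):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 by iterative soft-thresholding.

    A generic l1 solver for illustration; not the FSRM described above.
    """
    # step size 1/L, where L = ||A||_2^2 is the Lipschitz constant of the gradient
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                # gradient of the quadratic term
        z = x - grad / L                        # gradient descent step
        # soft-thresholding (proximal operator of the l1 norm)
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

# toy example: recover a 2-sparse coefficient vector from a random dictionary
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
A /= np.linalg.norm(A, axis=0)                  # unit-norm atoms
x_true = np.zeros(60)
x_true[[5, 17]] = [1.5, -2.0]
b = A @ x_true
x_hat = ista(A, b)
```

In this toy setting the recovered `x_hat` is sparse, with its dominant entries on the true support, which is the property the FSRM exploits to shrink the linear system that must be solved.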