High computational cost impedes the application of sophisticated image interpolation algorithms in time-critical scenarios. In the quest for practical solutions, approximate algorithms have been developed that trade quality for speed. This paper presents a parallel implementation of a piecewise autoregressive (PAR) image interpolation algorithm using CUDA (Compute Unified Device Architecture) on the GPU. The interpolation algorithm uses a piecewise autoregressive model whose parameters are adjusted according to the local pixel structure. To estimate the model parameters and the missing pixels jointly within a local window, the resulting non-linear optimization problem is solved by an iterative gradient descent method. By splitting the image into many small local windows and launching one CUDA thread per window, the interpolation is processed in parallel. Experimental results show that the parallel CUDA implementation on the GPU achieves better performance than the traditional serial algorithm running on the CPU.
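The per-window joint estimation described above can be sketched as a toy routine, written here in Python for readability rather than CUDA. Everything specific below is an illustrative assumption, not the paper's configuration: the 5x5 window size, the four-diagonal-neighbour PAR stencil, the coefficient initialization, and the step size and iteration count. Gradient descent updates both the four PAR coefficients and the single missing centre pixel, mirroring the joint parameter/pixel optimization that each CUDA thread would perform on its own window.

```python
import numpy as np

# Diagonal-neighbour offsets of the assumed order-4 PAR stencil
OFFS = [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def par_interpolate(win, iters=300, lr=0.02):
    """Toy joint estimation for one 5x5 window (hypothetical settings).

    win[2, 2] is treated as the missing pixel. The PAR coefficients `a`
    and the missing pixel are optimized together by gradient descent on
    the squared prediction error over all interior pixels of the window.
    Returns the estimated centre pixel and the fitted coefficients.
    """
    win = win.astype(float).copy()
    a = np.full(4, 0.25)  # uniform initial PAR coefficients (assumption)
    # Initial guess for the missing pixel: mean of its diagonal neighbours
    win[2, 2] = np.mean([win[2 + di, 2 + dj] for di, dj in OFFS])

    for _ in range(iters):
        grad_a = np.zeros(4)
        grad_x = 0.0
        # Residual at every interior pixel, predicted from its 4 diagonals
        for i in range(1, 4):
            for j in range(1, 4):
                n = np.array([win[i + di, j + dj] for di, dj in OFFS])
                r = win[i, j] - a @ n
                grad_a += -2.0 * r * n
                if (i, j) == (2, 2):
                    grad_x += 2.0 * r          # centre as the predicted pixel
                for k, (di, dj) in enumerate(OFFS):
                    if (i + di, j + dj) == (2, 2):
                        grad_x += -2.0 * r * a[k]  # centre as a neighbour
        a -= lr * grad_a
        win[2, 2] -= lr * grad_x
    return win[2, 2], a

# Usage: a smooth diagonal ramp, pixel values normalized to [0, 1]
ramp = np.add.outer(np.arange(5), np.arange(5)) / 8.0
estimate, coeffs = par_interpolate(ramp)
```

In the GPU version the abstract describes, this routine would be the body of a kernel, with one thread handling one window; because windows are independent, no inter-thread communication is needed, which is what makes the problem map well onto CUDA.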