The CGRA (coarse-grained reconfigurable architecture) has been considered an attractive architecture for accelerating data-intensive applications due to the performance and flexibility it can provide. However, the cache memory that stores the configuration code significantly increases the silicon area, making the architecture less attractive. This paper proposes an approach to saving cache memory space through code compression for CGRAs. It is based on the observation that typical configuration code consists of repetitions of the same instruction patterns. Experiments with several applications show that the proposed approach reduces the code size by 56% on average, and the required cache area by 26% on average and by up to 68.6%, when the hardware overhead is taken into account.
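The compression idea rests on runs of identical instruction words in the configuration stream. As a hypothetical illustration only (the paper's actual scheme is not detailed in the abstract), a minimal run-length encoder over instruction words might look like this; the function names and data are made up for the example:

```python
# Hypothetical sketch, NOT the paper's scheme: exploit repeated
# instruction patterns by replacing each run of an identical
# configuration word with a (count, word) pair.

def compress(config_words):
    """Run-length encode a list of configuration instruction words."""
    out = []
    i = 0
    while i < len(config_words):
        j = i
        # Extend the run while the same instruction word repeats.
        while j < len(config_words) and config_words[j] == config_words[i]:
            j += 1
        out.append((j - i, config_words[i]))
        i = j
    return out

def decompress(pairs):
    """Expand (count, word) pairs back into the original stream."""
    return [w for count, w in pairs for _ in range(count)]

# Example: three repeats of 0xA1, two of 0xB2, one of 0xA1.
words = [0xA1, 0xA1, 0xA1, 0xB2, 0xB2, 0xA1]
packed = compress(words)
assert packed == [(3, 0xA1), (2, 0xB2), (1, 0xA1)]
assert decompress(packed) == words
```

A real CGRA decompressor would sit between the configuration cache and the fetch path and must be cheap enough in hardware that its area cost does not erase the cache savings, which is why the abstract reports area reduction with the hardware overhead included.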
Date of Conference: 19-20 July 2011