This paper introduces a new benchmark for evaluating the performance of landmark-based shape-correspondence algorithms used in statistical shape analysis. Unlike previous shape-correspondence evaluation methods, the proposed benchmark first generates a large set of synthetic shape instances by randomly sampling a given statistical shape model, which defines a ground-truth shape space. We then run a test shape-correspondence algorithm on these synthetic shape instances to identify a set of corresponded landmarks. From these landmarks, we construct a new statistical shape model, which defines a new shape space, and we finally compare this new shape space against the ground-truth shape space to measure the performance of the test shape-correspondence algorithm. We introduce three new landmark-independent performance measures to quantify the difference between the ground-truth shape space and the newly derived one. By combining a ground-truth shape space defined by a statistical shape model with these three landmark-independent performance measures, we believe the proposed benchmark allows for a more objective evaluation of shape correspondence than previous methods. In this paper we focus on developing the proposed benchmark for 2D shape correspondence; however, it can be easily extended to the 3D case.
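
The benchmark pipeline described above can be sketched in code. The following is a minimal illustration, not the paper's actual implementation: it assumes a PCA-based statistical shape model (mean plus linear modes of variation over 2D landmark coordinates), samples synthetic shape instances from it, rebuilds a model from the sampled shapes, and compares the two shape spaces through their variance spectra, one simple landmark-independent quantity; the paper's three performance measures are defined differently. All function names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_ssm(shapes):
    """Build a PCA statistical shape model from an (n_shapes, 2*n_landmarks) matrix.

    Returns the mean shape, the modes of variation (rows of vt), and the
    per-mode variances of the training set.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variances = (s ** 2) / (len(shapes) - 1)
    return mean, vt, variances

def sample_shapes(mean, modes, variances, n_samples):
    """Randomly sample synthetic shape instances x = mean + b @ modes,
    with mode coefficients b drawn from N(0, diag(variances))."""
    b = rng.standard_normal((n_samples, len(variances))) * np.sqrt(variances)
    return mean + b @ modes

def spectrum_difference(var_truth, var_test, k):
    """Compare two shape spaces by their leading-k variance spectra.

    This depends only on the spaces' statistics, not on which landmarks
    were placed where, so it is landmark independent."""
    a = np.sort(var_truth)[::-1][:k]
    b = np.sort(var_test)[::-1][:k]
    return float(np.abs(a - b).sum() / a.sum())

# Ground-truth model: 4 landmarks (8 coordinates), 3 modes of variation.
gt_mean = np.zeros(8)
gt_modes = np.eye(8)[:3]
gt_vars = np.array([4.0, 2.0, 1.0])

# Sample synthetic instances, rebuild a model, and compare shape spaces.
synthetic = sample_shapes(gt_mean, gt_modes, gt_vars, 2000)
_, _, rebuilt_vars = build_ssm(synthetic)
score = spectrum_difference(gt_vars, rebuilt_vars, k=3)
```

In a real evaluation, the test shape-correspondence algorithm would be run on the synthetic instances before rebuilding the model, so that `score` reflects correspondence error rather than only sampling noise.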